https://w.atwiki.jp/usb_audio/pages/34.html
Original: Audio Device Document 1.0 (PDF)

USB Device Class Definition for Audio Devices, Release 1.0, March 18, 1998

Table 3-1 Status Word Format

Offset  Field        Size  Value   Description
0       bStatusType  1     Bitmap  D7: Interrupt Pending
                                   D6: Memory Contents Changed
                                   D5..4: Reserved
                                   D3..0: Originator
                                     0 = AudioControl interface
                                     1 = AudioStreaming interface
                                     2 = AudioStreaming endpoint
                                     3..15 = Reserved
1       bOriginator  1     Number  ID of the Terminal, Unit, interface, or endpoint that reports the interrupt.

3.7.2 AudioStreaming Interface

AudioStreaming interfaces are used to interchange digital audio data streams between the Host and the audio function. They are optional. An audio function can have zero or more AudioStreaming interfaces associated with it, each possibly carrying data of a different nature and format. Each AudioStreaming interface can have at most one isochronous data endpoint. This construction guarantees a one-to-one relationship between the AudioStreaming interface and the single audio data stream related to the endpoint. In some cases, the isochronous data endpoint is accompanied by an associated isochronous synch endpoint for synchronization purposes. The isochronous data endpoint is required to be the first endpoint in the AudioStreaming interface. The synch endpoint always follows its associated data endpoint.

An AudioStreaming interface can have alternate settings that can be used to change certain characteristics of the interface and underlying endpoint. A typical use of alternate settings is to provide a way to change the bandwidth requirements an active AudioStreaming interface imposes on the USB. By incorporating a low-bandwidth or even zero-bandwidth alternate setting for each AudioStreaming interface, a device offers the Host software the option to temporarily relinquish USB bandwidth by switching to this low-bandwidth alternate setting. If such an alternate setting is implemented, it must be the default alternate setting (alternate setting zero).
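The two-byte Status Word of Table 3-1 can be unpacked with a few mask operations. The following sketch shows one way to do it; the function and key names are illustrative, not from the specification.

```python
def parse_status_word(data: bytes) -> dict:
    """Unpack the 2-byte Status Word of Table 3-1 (illustrative helper).

    bStatusType bitmap: D7 = Interrupt Pending, D6 = Memory Contents
    Changed, D3..0 = Originator kind (0 = AudioControl interface,
    1 = AudioStreaming interface, 2 = AudioStreaming endpoint).
    """
    b_status_type, b_originator = data[0], data[1]
    return {
        "interrupt_pending": bool(b_status_type & 0x80),  # D7
        "memory_changed": bool(b_status_type & 0x40),     # D6
        "originator_kind": b_status_type & 0x0F,          # D3..0
        "originator_id": b_originator,                    # bOriginator
    }

# A status word reporting a pending interrupt from AudioControl entity 5:
info = parse_status_word(bytes([0x80, 0x05]))
```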
A zero-bandwidth alternate setting can be implemented by specifying zero endpoints in the standard AudioStreaming interface descriptor. All other interface and endpoint descriptors (both standard and class-specific) need not be specified in this case.

The AudioStreaming interface is essentially used to provide an access point for the Host software (drivers) to manipulate the behavior of the physical interface it represents. Therefore, even external connections to the audio function (S/PDIF interface, analog input, etc.) can be represented by an AudioStreaming interface so that the Host software can control certain aspects of those connections. This type of AudioStreaming interface has no associated USB endpoints. The related audio data stream does not use USB as a transport medium. In addition, the concept of dynamic interfaces as described in the Universal Serial Bus Class Specification can be used to notify the Host software that changes have occurred on the external connection. This is analogous to switching alternate settings on an AudioStreaming interface with USB endpoints, except that the switch is now device-initiated instead of Host-initiated.

As an example, consider an S/PDIF connection to an audio function. If nothing is connected to this external S/PDIF interface, the AudioStreaming interface is idle and reports itself as being dynamic and non-configured (bInterfaceClass=0x00). If the user connects a standard IEC958 signal to the audio function, the S/PDIF receiver inside the audio function detects this and notifies the Host that the AudioStreaming interface has switched to its IEC958 mode (alternate setting x). If, on the other hand, an IEC1937 signal carrying MPEG-encoded audio is connected, the AudioStreaming interface switches to the appropriate setting (alternate setting y) to handle the MPEG decoding process.
For every isochronous OUT or IN endpoint defined in any of the AudioStreaming interfaces, there must be a corresponding Input or Output Terminal defined in the audio function. For the Host to fully understand the nature and behavior of the connection, it must take into account the interface- and endpoint-related descriptors as well as the Terminal-related descriptor.

3.7.2.1 Isochronous Audio Data Stream Endpoint

In general, the data streams that are handled by an isochronous audio data endpoint do not necessarily map directly to the logical channels that exist within the audio function. As an example, consider a "stereo" audio data stream that contains audio data encoded in Dolby Prologic format. Although there is only one data stream, carrying interleaved samples for Left and Right (or more precisely LT and RT), these two channels carry information for four logical channels (Left, Right, Center, and Surround). Other examples include cases in which multiple logical audio channels are compressed into a single data stream. The format of such a data stream can be entirely different from the native format of the logical channels (for example, 256 kbits/s MPEG1 stereo audio as opposed to 176.4 kbytes/s 16-bit stereo 44.1 kHz audio). Therefore, to describe the data transfer at the endpoint level correctly, the notion of logical channel is replaced by the notion of audio data stream. It is the responsibility of the AudioStreaming interface that contains the OUT endpoint to convert between the audio data stream and the embedded logical channels before handing the data over to the Input Terminal. In many cases, this conversion process involves some form of decoding. Likewise, the AudioStreaming interface that contains the IN endpoint must convert logical channels from the Output Terminal into an audio data stream, often using some form of encoding.
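The two bitrate figures quoted above follow from simple arithmetic, which can be checked directly:

```python
# 16-bit stereo PCM at 44.1 kHz: rate x channels x bytes-per-sample.
pcm_bytes_per_second = 44_100 * 2 * 2
assert pcm_bytes_per_second == 176_400   # i.e. 176.4 kbytes/s

# The 256 kbit/s MPEG1 stream, for comparison, in bytes per second:
mpeg_bytes_per_second = 256_000 // 8
assert mpeg_bytes_per_second == 32_000   # less than a fifth of the PCM rate
```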
Consequently, requests to control properties that exist within an audio function, such as volume or mute, cannot be sent to the endpoint in an AudioStreaming interface. An AudioStreaming interface operates on audio data streams and is unaware of the number of logical channels it eventually serves. Instead, these requests must be directed to the appropriate Units or Terminals of the audio function via the AudioControl interface.

As already mentioned, an AudioStreaming interface can have zero or one isochronous audio data endpoint. If multiple synchronous audio channels must be communicated between Host and audio function, they must be clustered into one audio channel cluster by interleaving the individual audio data, and the result can be directed to the single endpoint. Furthermore, a single synch endpoint, if needed, can service the entire cluster. In this way, a minimum number of endpoints is consumed to transport related data streams. If an audio function needs more than one cluster to operate, each cluster is directed to the endpoint of a separate AudioStreaming interface, belonging to the same Audio Interface Collection (all servicing the same audio function). If there is a need to manipulate a number of AudioStreaming interfaces as a whole, these interfaces can be tied together. The techniques for associating interfaces described in the Universal Serial Bus Class Specification should be used to create the binding.

3.7.2.2 Isochronous Synch Endpoint

For adaptive audio source endpoints and asynchronous audio sink endpoints, an explicit synch mechanism is needed to maintain synchronization during transfers. For details about synchronization, see Section 5, "USB Data Flow Model," in the USB Specification and the relevant parts of the Universal Serial Bus Class Specification. The information carried over the synch path consists of a 3-byte data packet.
These three bytes contain the Ff value in 10.14 format as described in Section 5.10.4.2, "Feedback," of the USB Specification. Ff represents the average number of samples the endpoint must produce or consume per frame to match the desired sampling frequency Fs exactly. A new Ff value is available every 2^(10-P) ms (frames), where P can range from 1 to 9, inclusive. The sample clock Fs is always derived from a master clock Fm in the device. P is related to the ratio between those clocks through the following relationship:

    Fm = Fs * 2^P

In worst-case conditions, only Fs is available and Fm = Fs, giving P = 1, because one can always use phase information to resolve the estimation of Fs within half a clock cycle.

An adaptive audio source IN endpoint is accompanied by an associated isochronous synch OUT endpoint that carries Ff. An asynchronous audio sink OUT endpoint is accompanied by an associated isochronous synch IN endpoint. For adaptive IN endpoints and asynchronous OUT endpoints, the standard endpoint descriptor provides the bSynchAddress field to establish a link to the associated synch endpoint. It contains the address of the synch endpoint. The bSynchAddress field of the synch endpoint's standard endpoint descriptor must be set to zero. As indicated earlier, a new Ff value is available every 2^(10-P) frames, with P ranging from 1 to 9. The bRefresh field of the synch standard endpoint descriptor is used to report the exponent (10-P) to the Host. It can range from 9 down to 1 (512 ms down to 2 ms).

3.7.2.3 Audio Channel Cluster Format

An audio channel cluster is a grouping of logical audio channels that share the same characteristics, such as sampling frequency, bit resolution, etc. Channel numbering in the cluster starts with channel one up to the number of channels in the cluster. The virtual channel zero is used to address a master Control in a Unit, effectively influencing all the channels at once.
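The 10.14 fixed-point encoding of Ff can be sketched as follows. This is an illustration, not driver code; it assumes the three bytes are packed least-significant byte first, as multi-byte values normally travel on the USB wire.

```python
def encode_ff(samples_per_frame: float) -> bytes:
    """Pack Ff as a 10.14 fixed-point value into 3 bytes (assumed
    little-endian; a sketch of the format, not an implementation)."""
    fixed = round(samples_per_frame * (1 << 14))  # shift into 10.14 format
    return fixed.to_bytes(3, "little")

def decode_ff(data: bytes) -> float:
    """Recover the fractional samples-per-frame value from 3 bytes."""
    return int.from_bytes(data, "little") / (1 << 14)

# A 44.1 kHz stream over 1 ms USB frames averages 44.1 samples per frame:
payload = encode_ff(44.1)
```

The quantization error of the round trip is bounded by one unit in the 14-bit fractional part, i.e. 2^-14 samples per frame.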
The maximum number of independent channels in an audio channel cluster is limited to 254. Indeed, channel zero is used to reference the master channel, and code 0xFF (255) is used in requests to indicate that the request parameter block holds values for all available addressed Controls. For further details, refer to Section 5.2.2, "AudioControl Requests," and the sections that follow, describing the second form of requests.

In many cases, each channel in the audio cluster is also tied to a certain location in the listening space. A trivial example of this is a cluster that contains Left and Right logical audio channels. To be able to describe more complex cases in a manageable fashion, this specification imposes some limitations and restrictions on the ordering of logical channels in an audio channel cluster. There are twelve predefined spatial locations:

· Left Front (L)
· Right Front (R)
· Center Front (C)
· Low Frequency Enhancement (LFE) [super woofer]
· Left Surround (LS)
· Right Surround (RS)
· Left of Center (LC) [in front]
· Right of Center (RC) [in front]
· Surround (S) [rear]
· Side Left (SL) [left wall]
· Side Right (SR) [right wall]
· Top (T) [overhead]

If there are logical channels present in the audio channel cluster that correspond to some of the previously defined spatial positions, then they must appear in the order specified in the above list. For instance, if a cluster contains logical channels Left, Right, and LFE, then channel 1 is Left, channel 2 is Right, and channel 3 is LFE. To characterize an audio channel cluster, a cluster descriptor is introduced.
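The mandated ordering can be expressed as a simple filter over the predefined list. A sketch (the function name and abbreviations are illustrative):

```python
# Predefined spatial locations in their mandated cluster order (D0..D11):
SPATIAL_ORDER = ["L", "R", "C", "LFE", "LS", "RS",
                 "LC", "RC", "S", "SL", "SR", "T"]

def cluster_order(present: set) -> list:
    """Return the channel numbering a cluster must use for the given
    predefined locations (channel 1 comes first; channel 0 is the
    virtual master channel and is never part of the cluster)."""
    return [loc for loc in SPATIAL_ORDER if loc in present]

# The Left/Right/LFE example: channel 1 = L, channel 2 = R, channel 3 = LFE.
order = cluster_order({"LFE", "L", "R"})
```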
This descriptor is embedded within one of the following descriptors:

· Input Terminal descriptor
· Mixer Unit descriptor
· Processing Unit descriptor
· Extension Unit descriptor

The cluster descriptor contains the following fields:

· bNrChannels: a number that specifies how many logical audio channels are present in the cluster.

· wChannelConfig: a bit field that indicates which spatial locations are present in the cluster. The bit allocations are as follows:

  D0: Left Front (L)
  D1: Right Front (R)
  D2: Center Front (C)
  D3: Low Frequency Enhancement (LFE)
  D4: Left Surround (LS)
  D5: Right Surround (RS)
  D6: Left of Center (LC)
  D7: Right of Center (RC)
  D8: Surround (S)
  D9: Side Left (SL)
  D10: Side Right (SR)
  D11: Top (T)
  D15..12: Reserved

  Each bit set in this bitmap indicates that there is a logical channel in the cluster that carries audio information destined for the indicated spatial location. The channel ordering in the cluster must correspond to the ordering imposed by the above list of predefined spatial locations. If there are more channels in the cluster than there are bits set in the wChannelConfig field (i.e., bNrChannels > [Number_Of_Bits_Set]), then the first [Number_Of_Bits_Set] channels take the spatial positions indicated in wChannelConfig. The remaining channels have 'non-predefined' spatial positions (positions that do not appear in the predefined list). If none of the bits in wChannelConfig are set, then all channels have non-predefined spatial positions. If one or more channels have non-predefined spatial positions, their spatial location description can optionally be derived from the iChannelNames field.

· iChannelNames: index to a string descriptor that describes the spatial location of the first non-predefined logical channel in the cluster. The spatial locations of all remaining logical channels must be described by string descriptors with indices that immediately follow the index of the descriptor of the first non-predefined channel.
Therefore, iChannelNames inherently describes an array of string descriptor indices, ranging from iChannelNames to (iChannelNames + (bNrChannels - [Number_Of_Bits_Set]) - 1).

Example 1: An audio channel cluster that carries Dolby Prologic logical channels has the following cluster descriptor.

Table 3-2 Dolby Prologic Cluster Descriptor

Offset  Field           Size  Value   Description
0       bNrChannels     1     4       There are 4 logical channels in the cluster.
1       wChannelConfig  2     0x0107  Left, Right, Center, and Surround are present.
3       iChannelNames   1     Index   Because there are no non-predefined logical channels, this index must be set to 0.

Example 2: A hypothetical audio channel cluster inside an audio function could carry Left, Left Surround, Left of Center, and two auxiliary channels that each contain a different weighted mix of the Left, Left Surround, and Left of Center channels. The corresponding cluster descriptor would be:

Table 3-3 Left Group Cluster Descriptor

Offset  Field           Size  Value   Description
0       bNrChannels     1     5       There are 5 logical channels in the cluster.
1       wChannelConfig  2     0x0051  Left, Left Surround, Left of Center, and two non-predefined channels are present (bNrChannels > [Number_Of_Bits_Set]).
3       iChannelNames   1     Index   Optional index of the first non-predefined string descriptor.

Optional string descriptors:
String (Index)   = 'Left Down Mix 1'
String (Index+1) = 'Left Down Mix 2'

3.7.2.4 Audio Data Format

The format used to transport audio data over the USB is entirely determined by the code located in the wFormatTag field of the class-specific interface descriptor. Therefore, each defined Format Tag must document in detail the audio data format it uses. Consequently, format-specific descriptors are needed to fully describe the format.
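Both example descriptors can be derived mechanically from the bit allocation list. A sketch (function name and argument conventions are illustrative):

```python
# Bit positions of the predefined spatial locations in wChannelConfig:
SPATIAL_BITS = {"L": 0, "R": 1, "C": 2, "LFE": 3, "LS": 4, "RS": 5,
                "LC": 6, "RC": 7, "S": 8, "SL": 9, "SR": 10, "T": 11}

def cluster_descriptor(locations, extra_channels=0, i_channel_names=0):
    """Build the (bNrChannels, wChannelConfig, iChannelNames) triple of a
    cluster descriptor. `extra_channels` counts non-predefined channels;
    a sketch assuming `locations` holds predefined location names."""
    w_channel_config = 0
    for loc in locations:
        w_channel_config |= 1 << SPATIAL_BITS[loc]
    b_nr_channels = len(locations) + extra_channels
    return b_nr_channels, w_channel_config, i_channel_names

# Table 3-2, Dolby Prologic: Left, Right, Center, and Surround.
prologic = cluster_descriptor(["L", "R", "C", "S"])
# Table 3-3, Left group: three predefined plus two auxiliary channels.
left_group = cluster_descriptor(["L", "LS", "LC"], extra_channels=2)
```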
For details about the predefined Format Tags and associated data formats and descriptors, see the separate document, USB Audio Data Formats, that is considered part of this specification. Vendor-specific protocols must be fully documented by the manufacturer.
https://w.atwiki.jp/usb_audio/pages/22.html
Original: Audio Data Formats 1.0 (PDF)

USB Device Class Definition for Audio Data Formats, Release 1.0, March 18, 1998

Offset  Field          Size  Value   Description
8       tLowerSamFreq  3     Number  Lower bound in Hz of the sampling frequency range for this isochronous data endpoint.
11      tUpperSamFreq  3     Number  Upper bound in Hz of the sampling frequency range for this isochronous data endpoint.

Table 2-3 Discrete Number of Sampling Frequencies

Offset       Field         Size  Value   Description
8            tSamFreq[1]   3     Number  Sampling frequency 1 in Hz for this isochronous data endpoint.
…            …             …     …       …
8+(ns-1)*3   tSamFreq[ns]  3     Number  Sampling frequency ns in Hz for this isochronous data endpoint.

Note: In the case of adaptive isochronous data endpoints that support only a discrete number of sampling frequencies, the endpoint must at least tolerate ±1000 PPM inaccuracy on the reported sampling frequencies.

2.2.6 Supported Formats

The following paragraphs list all currently supported Type I Audio Data Formats.

2.2.6.1 PCM Format

The PCM (Pulse Coded Modulation) format is the most commonly used audio format to represent audio data streams. The audio data is not compressed and uses a signed two's-complement fixed-point format. It is left-justified (the sign bit is the Msb), and data is padded with trailing zeros to fill the remaining unused bits of the subframe. The binary point is located to the right of the sign bit so that all values lie within the range [-1, +1).

2.2.6.2 PCM8 Format

The PCM8 format is introduced to be compatible with the legacy 8-bit wave format. Audio data is uncompressed and uses 8 bits per sample (bBitResolution = 8). In this case, data is unsigned fixed-point, left-justified in the audio subframe, Msb first. The range is [0, 255].

2.2.6.3 IEEE_FLOAT Format

The IEEE_FLOAT format is based on the ANSI/IEEE-754 floating-point standard. Audio data is represented using the basic single-precision format. The basic single-precision number is 32 bits wide and has an 8-bit exponent and a 24-bit mantissa.
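Left-justifying a PCM sample inside a larger subframe can be sketched as below. The helper name is illustrative, and the sketch assumes the subframe bytes are transmitted least-significant byte first, as is usual for multi-byte USB data.

```python
def pack_pcm_subframe(sample: int, bit_resolution: int,
                      subframe_size: int) -> bytes:
    """Left-justify a signed `bit_resolution`-bit sample in a subframe of
    `subframe_size` bytes, padding the unused low bits with zeros."""
    total_bits = 8 * subframe_size
    shifted = sample << (total_bits - bit_resolution)  # left-justify
    # Mask to the subframe width so negative samples wrap to two's complement:
    return (shifted & ((1 << total_bits) - 1)).to_bytes(subframe_size, "little")

# A 16-bit sample in a 3-byte (24-bit) subframe: the low 8 bits are padding.
packed = pack_pcm_subframe(0x1234, 16, 3)
```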
Both mantissa and exponent are signed numbers, but neither is represented in two's-complement format. The mantissa is stored in sign-magnitude format and the exponent in biased form (also called excess-n form). In biased form, there is a positive integer (called the bias) that is subtracted from the stored number to get the actual number. For example, in an eight-bit exponent, the bias is 127. To represent 0, the number 127 is stored. To represent -100, 27 is stored. An exponent of all zeroes and an exponent of all ones are both reserved for special cases, so in an eight-bit field, exponents of -126 to +127 are possible. In the basic floating-point format, the mantissa is assumed to be normalized so that the most significant bit is always one and therefore is not stored. Only the fractional part is stored. The 32-bit IEEE-754 floating-point word is broken into three fields. The most significant bit stores the sign of the mantissa, the next group of 8 bits stores the exponent in biased form, and the remaining 23 bits store the magnitude of the fractional portion of the mantissa. For further information, refer to the ANSI/IEEE-754 standard. The data is conveyed over USB using 32 bits per sample (bBitResolution = 32; bSubframeSize = 4).

2.2.6.4 ALaw Format and µLaw Format

Starting from 12- or 16-bit linear PCM samples, simple compression down to 8 bits per sample (one byte per sample) can be achieved by using logarithmic companding. The compressed audio data uses 8 bits per sample (bBitsPerSample = 8). Data is signed fixed-point, left-justified in the subframe, Msb first. The compressed range is [-128, 128]. The difference between ALaw and µLaw compression lies in the formulae used to achieve the compression. Refer to the ITU G.711 standard for further details.
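The three-field layout of a single-precision word can be verified directly. A sketch using Python's standard `struct` module (the helper name is illustrative):

```python
import struct

def float_fields(value: float):
    """Split an IEEE-754 single-precision float into (sign, biased
    exponent, stored fraction) per the field layout described above."""
    (word,) = struct.unpack("<I", struct.pack("<f", value))
    sign = word >> 31                # 1 bit: sign of the mantissa
    exponent = (word >> 23) & 0xFF   # 8 bits, biased form, bias = 127
    fraction = word & 0x7FFFFF       # 23 bits: fractional part only,
    return sign, exponent, fraction  # the leading 1 is implicit

# 1.0 = +1.0 x 2^0: sign 0, stored exponent 0 + 127 = 127, fraction 0.
fields = float_fields(1.0)
```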
2.3 Type II Formats

Type II formats are used to transmit non-PCM encoded audio data in bitstreams that consist of a sequence of encoded audio frames.

2.3.1 Encoded Audio Frames

An encoded audio frame is a sequence of bits that contains an encoded representation of one or more physical audio channels. The encoding takes place over a fixed number of audio samples. Each encoded audio frame contains enough information to entirely reconstruct the audio samples (albeit not losslessly) encoded in the encoded audio frame. No information from adjacent encoded audio frames is needed during decoding. The number of samples used to construct one encoded audio frame depends on the encoding scheme. (For MPEG, the number of samples per encoded audio frame (nf) is 384 for Layer I or 1152 for Layer II. For AC-3, the number of samples is 1536.) In most cases, the encoded audio frame represents multiple physical audio channels. The number of bits per encoded audio frame may be variable. The content of the encoded audio frame is defined according to the implemented encoding scheme. Where applicable, the bit ordering shall be MSB first, relative to existing standards of serial transmission or storage of that encoding scheme. An encoded audio frame represents an interval longer than the USB frame time of 1 ms. This is typical of audio compression algorithms that use psycho-acoustic or vocal-tract parametric models.

Note: It is important to make a clear distinction between an audio frame (see Section 2.2.3, "Audio Frame") and an encoded audio frame. The overloaded use of the term audio frame could cause confusion. Therefore, this specification will always use the qualifier 'encoded' to refer to MPEG or AC-3 encoded audio frames.

2.3.2 Audio Bitstreams

An encoded audio bitstream is a concatenation of a potentially very large number of encoded audio frames, ordered according to ascending time. Subsequent encoded audio frames are independent and can be decoded separately.
2.3.3 USB Packets

Encoded audio bitstreams are packetized when transported over an isochronous pipe. Each USB packet contains only part of a single encoded audio frame. Packet sizes are determined according to the short-packet protocol. The encoded audio frame is broken down into a number of packets, each containing wMaxPacketSize bytes, except for the last packet, which may be smaller and contains the remainder of the encoded audio frame. If the MaxPacketsOnly bit D7 in the bmAttributes field of the class-specific endpoint descriptor is set, the last (short) packet must be padded with zero bytes to wMaxPacketSize length. No USB packet may contain bits belonging to different encoded audio frames. If the encoded audio frame length is not a multiple of 8 bits, the last byte in the last packet is padded with zero bits. The decoder must ignore all padded extra bits and bytes. Consecutive encoded audio frames are separated by at least one Transfer Delimiter. A Transfer Delimiter must be sent in all consecutive USB frames until the next encoded audio frame is due. The above rules guarantee that a new encoded audio frame always starts on a USB packet boundary.

2.3.4 Bandwidth Allocation

The encoded audio frame time tf equals the number of audio samples per encoded audio frame nf divided by the sampling rate fs of the original audio samples:

    tf = nf / fs

The allocated bandwidth for the pipe must accommodate the largest possible encoded audio frame to be transmitted within an encoded audio frame time. This should take into account the Transfer Delimiter requirement and any differences between the time base of the stream and the USB frame timer. The device may choose to consume more bandwidth than necessary (by increasing the reported wMaxPacketSize) to minimize the time needed to transmit an entire encoded audio frame. This can be used to enable early decoding and therefore minimize system latency.
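The packetization rules of Section 2.3.3 can be sketched as follows (an illustration under the stated rules, not a host-stack implementation; Transfer Delimiters are out of scope here):

```python
def packetize(frame: bytes, w_max_packet_size: int,
              max_packets_only: bool = False):
    """Split one encoded audio frame into USB packets: full
    wMaxPacketSize packets followed by a shorter final packet, which is
    zero-padded to full size when the MaxPacketsOnly bit (D7) is set."""
    packets = [frame[i:i + w_max_packet_size]
               for i in range(0, len(frame), w_max_packet_size)]
    if max_packets_only and packets and len(packets[-1]) < w_max_packet_size:
        packets[-1] = packets[-1].ljust(w_max_packet_size, b"\x00")
    return packets

# A 1000-byte encoded frame over 448-byte packets: 448 + 448 + 104 bytes.
pkts = packetize(bytes(1000), 448)
```

Note that every packet holds bits of only this one frame, so the next frame necessarily starts on a packet boundary, as the text above guarantees.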
2.3.5 Timing

The timing reference point is the beginning of an encoded audio frame. Therefore, the USB packet that contains the first bits (usually the encoded audio frame sync word) of the encoded audio frame is used as a timing reference in USB space. This USB packet is called the reference packet. The transmission of the reference packet of an encoded audio frame should begin at the target playback time of that frame (minus the endpoint's reported delay), rounded to the nearest USB frame time. This guarantees that, at the receiving end, the arrival of subsequent reference packets matches the encoded audio frame time tf as closely as possible.

2.3.6 Type II Format Type Descriptor

The Type II Format Type descriptor starts with the usual three fields bLength, bDescriptorType, and bDescriptorSubtype. The bFormatType field indicates this is a Type II descriptor. The wMaxBitRate field contains the maximum number of bits per second this interface can handle. It is a measure of the buffer size available in the interface. The wSamplesPerFrame field contains the number of non-PCM encoded audio samples contained within a single encoded audio frame. The sampling frequency capabilities of the endpoint are reported using the bSamFreqType field and following fields.

Table 2-4 Type II Format Type Descriptor

Offset  Field               Size  Value     Description
0       bLength             1     Number    Size of this descriptor, in bytes: 9+(ns*3).
1       bDescriptorType     1     Constant  CS_INTERFACE descriptor type.
2       bDescriptorSubtype  1     Constant  FORMAT_TYPE descriptor subtype.
3       bFormatType         1     Constant  FORMAT_TYPE_II. Constant identifying the Format Type the AudioStreaming interface is using.
4       wMaxBitRate         2     Number    Indicates the maximum number of bits per second this interface can handle. Expressed in kbits/s.
6       wSamplesPerFrame    2     Number    Indicates the number of PCM audio samples contained in one encoded audio frame.
8       bSamFreqType        1     Number    Indicates how the sampling frequency can be programmed:
                                            0: Continuous sampling frequency
                                            1..255: The number of discrete sampling frequencies supported by the isochronous data endpoint of the AudioStreaming interface (ns)
9...                                        See sampling frequency tables, below.

Depending on the value in the bSamFreqType field, the layout of the next part of the descriptor is as shown in the following tables.

Table 2-5 Continuous Sampling Frequency

Offset  Field          Size  Value   Description
9       tLowerSamFreq  3     Number  Lower bound in Hz of the sampling frequency range for this isochronous data endpoint.
12      tUpperSamFreq  3     Number  Upper bound in Hz of the sampling frequency range for this isochronous data endpoint.

Table 2-6 Discrete Number of Sampling Frequencies

Offset       Field         Size  Value   Description
9            tSamFreq[1]   3     Number  Sampling frequency 1 in Hz for this isochronous data endpoint.
…            …             …     …       …
9+(ns-1)*3   tSamFreq[ns]  3     Number  Sampling frequency ns in Hz for this isochronous data endpoint.

Note: In the case of adaptive isochronous data endpoints that support only a discrete number of sampling frequencies, the endpoint must at least tolerate ±1000 PPM inaccuracy on the reported sampling frequencies.

2.3.7 Rate Feedback

If the isochronous data endpoint needs explicit rate feedback (adaptive source, asynchronous sink), the feedback pipe shall report the number of equivalent PCM audio samples. The Host will accumulate this data and start transmission of an encoded audio frame whenever the current number of samples exceeds the number of samples per encoded audio frame. The remainder is kept in the accumulator.

2.3.8 Supported Formats

The following sections list all currently supported Type II Audio Data Formats. Format-specific descriptors and format-specific requests are explained in more detail.
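The host-side accumulator of Section 2.3.7 can be sketched as below. This is an illustration of the accounting only (names are invented, and "exceeds" is taken as "reaches or exceeds"); real hosts schedule the transfers as well.

```python
def feedback_scheduler(feedback_values, samples_per_frame):
    """Accumulate reported equivalent-PCM sample counts and start one
    encoded audio frame each time the running total reaches
    wSamplesPerFrame, carrying the remainder forward (a sketch)."""
    frames_started, acc = 0, 0.0
    for ff in feedback_values:
        acc += ff
        while acc >= samples_per_frame:   # start a frame, keep remainder
            frames_started += 1
            acc -= samples_per_frame
    return frames_started, acc

# MPEG Layer II (1152 samples per encoded frame) with the device
# reporting 48 equivalent PCM samples in each of 24 USB frames:
started, remainder = feedback_scheduler([48.0] * 24, 1152)
```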
2.3.8.1 MPEG Format

In the current specification, only MPEG decoding aspects are considered. Real-time MPEG encoding peripherals are not (yet) available and consequently are not covered by this specification.

2.3.8.1.1 MPEG Format-Specific Descriptor

The wFormatTag field is a duplicate of the wFormatTag field in the class-specific AudioStreaming interface descriptor. The same field is used here to identify the format-specific descriptor. The bmMPEGCapabilities bitmap field describes the capabilities of the MPEG decoder built into the AudioStreaming interface. Some general information must be retrieved from the Format Type-specific descriptor. For instance, the sampling frequencies supported by the decoder are reported through the Format Type-specific descriptor. This includes the ability of the decoder to handle low sampling frequencies (16 kHz, 22.05 kHz, and 24 kHz) besides the standard 32 kHz, 44.1 kHz, and 48 kHz sampling frequencies.

Bits D2..0 of the bmMPEGCapabilities field are used to indicate which layers this decoder is capable of processing. The different layers relate to the different algorithms that are used during encoding and decoding. Bit D3 indicates that the decoder can only process the MPEG-1 base stream. Therefore, only Left and Right channels will be output. Bit D4 indicates that the decoder can handle MPEG-2 streams that contain two independent stereo pairs instead of the normal 3/2 encoding scheme. This bit is only applicable to MPEG-2 decoders. Bit D5 indicates that the decoder supports the MPEG dual channel mode. In this case, the MPEG-1 base stream does not contain the Left and Right channels of a stereo pair but instead contains two independent mono channels. One of these channels can be selected through the proper request (Dual Channel Control) and reproduced over the Left and Right output channels simultaneously. Bit D6 indicates that the decoder supports the DVD MPEG-2 augmentation to 7.1 channels instead of the standard 5.1 channels.
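The bit assignments above can be decoded as follows. A sketch: the key names are illustrative, and the assumption that D0/D1/D2 map to Layers I/II/III respectively should be checked against the descriptor table in the specification.

```python
def mpeg_capabilities(bm: int) -> dict:
    """Decode the bmMPEGCapabilities bits D6..D0 described above
    (illustrative field names, not taken from the spec)."""
    return {
        "layer_1": bool(bm & 0x01),           # D2..0: supported layers
        "layer_2": bool(bm & 0x02),           # (assumed D0=I, D1=II, D2=III)
        "layer_3": bool(bm & 0x04),
        "mpeg1_only": bool(bm & 0x08),        # D3: MPEG-1 base stream only
        "two_stereo_pairs": bool(bm & 0x10),  # D4: MPEG-2, 2 indep. pairs
        "dual_channel": bool(bm & 0x20),      # D5: dual channel mode
        "seven_dot_one": bool(bm & 0x40),     # D6: DVD MPEG-2 7.1
    }

# 0x32 = D1 + D4 + D5: Layer II, two stereo pairs, dual channel mode.
caps = mpeg_capabilities(0x32)
```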
https://w.atwiki.jp/usb_audio/pages/32.html
Original: Audio Device Document 1.0 (PDF)

· Resolution attribute

As an example, consider a Volume Control inside a Feature Unit. By issuing the appropriate Get requests, the Host software can obtain values for the Volume Control's attributes and, for instance, use them to correctly display the Control on the screen. Setting the Volume Control's current attribute allows the Host software to change the volume setting of the Volume Control. Additionally, each Entity (Unit or Terminal) in an audio function can have a memory space attribute. This attribute optionally provides generic access to the internal memory space of the Entity. This could be used to implement vendor-specific control of an Entity through generically provided access.

3.5.1 Input Terminal

The Input Terminal (IT) is used to interface between the audio function's 'outside world' and other Units in the audio function. It serves as a receptacle for audio information flowing into the audio function. Its function is to represent a source of incoming audio data after this data has been properly extracted from the original audio stream into the separate logical channels that are embedded in this stream (the decoding process). The logical channels are grouped into an audio channel cluster and leave the Input Terminal through a single Output Pin.

An Input Terminal can represent inputs to the audio function other than USB OUT endpoints. A Line-In connector on an audio device is an example of such a non-USB input. However, if the audio stream is entering the audio function by means of a USB OUT endpoint, there is a one-to-one relationship between that endpoint and its associated Input Terminal. The class-specific endpoint descriptor contains a field that holds a direct reference to this Input Terminal.
The Host needs to use both the endpoint descriptors and the Input Terminal descriptor to get a full understanding of the characteristics and capabilities of the Input Terminal. Stream-related parameters are stored in the endpoint descriptors. Control-related parameters are stored in the Terminal descriptor.

The conversion process from incoming, possibly encoded, audio streams to logical audio channels always involves some kind of decoding engine. This specification defines several types of decoding. These decoding types range from rather trivial decoding schemes, like converting interleaved stereo 16-bit PCM data into a Left and a Right logical channel, to very sophisticated schemes, like converting an MPEG-2 7.1 encoded audio stream into Left, Left Center, Center, Right Center, Right, Right Surround, Left Surround, and Low Frequency Enhancement logical channels. The decoding engine is considered part of the Entity that actually receives the encoded audio data streams (such as a USB AudioStreaming interface). The type of decoding is therefore implied by the wFormatTag value, located in the AudioStreaming interface descriptor. Requests specific to the decoding engine must be directed to the AudioStreaming interface. The associated Input Terminal deals with the logical channels after they have been decoded. The symbol for the Input Terminal is depicted in the following figure.

[Figure 3-1 Input Terminal Icon]

3.5.2 Output Terminal

The Output Terminal (OT) is used to interface between Units inside the audio function and the 'outside world'. It serves as an outlet for audio information flowing out of the audio function. Its function is to represent a sink of outgoing audio data before this data is properly packed from the original separate logical channels into the outgoing audio stream (the encoding process). The audio channel cluster enters the Output Terminal through a single Input Pin.
An Output Terminal can represent outputs from the audio function other than USB IN endpoints. A speaker built into an audio device or a Line Out connector is an example of such a non-USB output. However, if the audio stream is leaving the audio function by means of a USB IN endpoint, there is a one-to-one relationship between that endpoint and its associated Output Terminal. The class-specific endpoint descriptor contains a field that holds a direct reference to this Output Terminal.

The Host needs to use both the endpoint descriptors and the Output Terminal descriptor to fully understand the characteristics and capabilities of the Output Terminal. Stream-related parameters are stored in the endpoint descriptors. Control-related parameters are stored in the Terminal descriptor.

The conversion process from incoming logical audio channels to possibly encoded audio streams always involves some kind of encoding engine. This specification defines several types of encoding, ranging from rather trivial to very sophisticated schemes. The encoding engine is considered part of the Entity that actually transmits the encoded audio data streams (like a USB AudioStreaming interface). The type of encoding is therefore implied in the wFormatTag value, located in the AudioStreaming interface descriptor. Requests specific to the encoding engine must be directed to the AudioStreaming interface. The associated Output Terminal deals with the logical channels before encoding.

The symbol for the Output Terminal is depicted in the following figure.

[image] Figure 3-2 Output Terminal Icon

3.5.3 Mixer Unit

The Mixer Unit (MU) transforms a number of logical input channels into a number of logical output channels. The input channels are grouped into one or more audio channel clusters. Each cluster enters the Mixer Unit through an Input Pin.
The logical output channels are grouped into one audio channel cluster and leave the Mixer Unit through a single Output Pin. Every input channel can virtually be mixed into all of the output channels. If n is the total number of input channels and m is the number of output channels, then there are n x m mixing Controls in the Mixer Unit. Not all of these Controls have to be physically implemented. Some Controls can have a fixed setting and be non-programmable. The Mixer Unit Descriptor reports which Controls are programmable in the bmControls bitmap field. Using this model, a permanent connection can be implemented by reporting the Control as non-programmable and by returning a Control setting of 0 dB when requested. Likewise, a missing connection can be implemented by reporting the Control as non-programmable and by returning a Control setting of -∞ dB.

The symbol for the Mixer Unit can be found in the following figure.

[image] Figure 3-3 Mixer Unit Icon

3.5.4 Selector Unit

The Selector Unit (SU) selects from n audio channel clusters, each containing m logical input channels, and routes them unaltered to the single output audio channel cluster, containing m output channels. It represents a multi-channel source selector, capable of selecting between n m-channel sources. It has n Input Pins and a single Output Pin.

The symbol for the Selector Unit can be found in the following figure.

[image] Figure 3-4 Selector Unit Icon

3.5.5 Feature Unit

The Feature Unit (FU) is essentially a multi-channel processing unit that provides basic manipulation of the incoming logical channels.
For each logical channel, the Feature Unit optionally provides audio Controls for the following features:
· Volume
· Mute
· Tone Control (Bass, Mid, Treble)
· Graphic Equalizer
· Automatic Gain Control
· Delay
· Bass Boost
· Loudness

In addition, the Feature Unit optionally provides the above audio Controls but now influencing all channels of the cluster at once. In this way, ‘master’ Controls can be implemented. The master Controls are cascaded after the individual channel Controls. This setup is especially useful in multi-channel systems where the individual channel Controls can be used for channel balancing and the master Controls can be used for overall settings. The logical channels in the cluster are numbered from one to the total number of channels in the cluster. The ‘master’ channel has channel number zero and is always virtually present. The Feature Unit Descriptor reports which Controls are present for every channel in the Feature Unit and for the ‘master’ channel.

All logical channels in a Feature Unit are fully independent. There exist no cross couplings among channels within the Feature Unit. There are as many logical output channels as there are input channels. These are grouped into one audio channel cluster that enters the Feature Unit through a single Input Pin and leaves the Unit through a single Output Pin.

The symbol for the Feature Unit is depicted in the following figure.

[image] Figure 3-5 Feature Unit Icon

3.5.6 Processing Unit

The Processing Unit (PU) represents a functional block inside the audio function that transforms a number of logical input channels, grouped into one or more audio channel clusters, into a number of logical output channels, grouped into one audio channel cluster. Therefore, the Processing Unit can have multiple Input Pins and has a single Output Pin.
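The per-channel Control reporting of the Feature Unit described above can be sketched as follows. This is a minimal sketch; the bit assignments assume the bmaControls layout of the Feature Unit descriptor defined later in this specification (D0 Mute through D9 Loudness), with element 0 covering the ‘master’ channel.

```python
# Decode a Feature Unit's per-channel Control bitmaps (bmaControls).
# Assumed bit assignments (Feature Unit descriptor definition):
# D0 Mute, D1 Volume, D2 Bass, D3 Mid, D4 Treble, D5 Graphic Equalizer,
# D6 Automatic Gain, D7 Delay, D8 Bass Boost, D9 Loudness.
FEATURES = ["Mute", "Volume", "Bass", "Mid", "Treble",
            "Graphic Equalizer", "Automatic Gain", "Delay",
            "Bass Boost", "Loudness"]

def decode_bma_controls(bma: list[int]) -> dict[int, list[str]]:
    """bma[0] is the 'master' channel (channel 0); bma[n] is logical channel n."""
    result = {}
    for channel, bitmap in enumerate(bma):
        result[channel] = [name for bit, name in enumerate(FEATURES)
                           if bitmap & (1 << bit)]
    return result

# Example: master channel supports Mute+Volume; channels 1 and 2 Volume only.
print(decode_bma_controls([0x0003, 0x0002, 0x0002]))
```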
This specification defines several standard transforms (algorithms) that are considered necessary to support additional audio functionality. These transforms are not covered by the other Unit types but are commonplace enough to be included in this specification so that a generic driver can provide control for them. Processing Units are encouraged to support at least the Enable Processing Control, allowing the Host software to bypass whatever functionality is incorporated in the Processing Unit.

3.5.6.1 Up/Down-mix Processing Unit

The Up/Down-mix Processing Unit provides facilities to derive m output audio channels from n input audio channels. The algorithms and transforms applied to accomplish this are not defined by this specification and can be proprietary. The input channels are grouped into one input channel cluster that enters the Processing Unit over a single Input Pin. Likewise, all output channels are grouped into one output channel cluster, leaving the Processing Unit over a single Output Pin.

The Up/Down-mix Processing Unit can support multiple modes of operation (besides the bypass mode, controlled by the Enable Processing Control). The available input audio channels are dictated by the Unit or Terminal to which the Up/Down-mix Processing Unit is connected. The Up/Down-mix Processing Unit descriptor reports which up/down-mixing modes the Unit supports through its waModes() array. Each element of the waModes() array indicates which output channels in the output cluster are effectively used in a particular mode. The unused output channels in the output cluster must produce muted output. Mode selection is implemented using the Get/Set Control request.

As an example, consider the case where an Up/Down-mix Processing Unit is connected to an Input Terminal, producing Dolby™ AC-3 5.1 decoded audio. The input audio channel cluster to the Up/Down-mix Processing Unit therefore contains Left, Right, Center, Left Surround, Right Surround and LFE logical channels.
Suppose the audio function’s hardware is limited to reproducing only dual channel audio. Then the Up/Down-mix Processing Unit could use some (sophisticated) algorithms to down-mix the available spatial audio information into two (‘enriched’) channels so that the maximum spatial effects can be experienced using only two channels. It is left to the audio function’s discretion to use the appropriate down-mix algorithm, depending on the physical nature of the Output Terminal to which the Up/Down-mix Processing Unit is routed. For instance, a different down-mix algorithm is needed depending on whether the ‘enriched’ stereo stream is sent to a pair of speakers or to a headphone set. However, this knowledge already resides within the audio function, and deciding which down-mix algorithm to use does not need Host intervention.

As a second interesting example, suppose the hardware is capable of servicing eight discrete audio channels, for instance a full-fledged MPEG-2 7.1 system. Now the Up/Down-mix Processing Unit could use certain techniques to derive meaningful content for the extra audio channels (Left of Center, Right of Center) that are present in the output cluster and are missing in the input channel cluster (AC-3 5.1). This is a typical example of an up-mix situation.

The symbol for the Up/Down-mix Processing Unit is depicted in the following figure.

[image] Figure 3-6 Up/Down-mix Processing Unit Icon

3.5.6.2 Dolby Prologic Processing Unit

The Dolby Prologic™ decoding process can be seen as an operator on the Left and Right logical channels of the input cluster of the Unit. It is capable of extracting additional audio data (Center and/or Surround channels) from information that is transparently ‘superimposed’ on the Left and Right audio channels. It therefore differs from a true decoding process as defined for an Input Terminal. It can be applied on a logical audio stream anywhere in the audio function.
The Dolby Prologic Processing Unit is a specialized derivative of the Up/Down-mix Processing Unit. The Dolby Prologic Processing Unit can have the following modes of operation (besides the bypass mode, controlled by the Enable Processing Control):
· Left, Right, Center channel decoding
· Left, Right, Surround channel decoding
· Left, Right, Center, Surround channel decoding

The Dolby Prologic Processing Unit descriptor reports which modes the Unit supports. Mode selection is then implemented using the Get/Set Control request. Dolby Prologic Surround Delay Control is considered not to be part of the Dolby Prologic™ Processing Unit and must be handled by a separate Feature Unit. Dolby Prologic Bass Management is the local responsibility of the audio function and should not be controllable from the Host.

The symbol for the Dolby Prologic Processing Unit can be found in the following picture.

[image] Figure 3-7 Dolby Prologic Processing Unit Icon

3.5.6.3 3D-Stereo Extender Processing Unit

The 3D-Stereo Extender Processing Unit operates on Left and Right channels only. It processes an existing stereo (two channel) soundtrack to add spaciousness and to make it appear to originate from outside the Left/Right speaker locations. Extended stereo effects can be achieved via various, straightforward methods. The algorithms and transforms applied to accomplish this are not defined by this specification and can be proprietary. The effects of the 3D-Stereo Extender Processing Unit can be bypassed at all times through manipulation of the Enable Processing Control. The size of the listening area (the area in which the listener has to be placed with respect to the speakers to hear the effect, also called the sweet spot) can be controlled using the proper Get/Set Control request.
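The mode reporting used by the Up/Down-mix and Dolby Prologic Processing Units above can be sketched in code. This is a minimal sketch; the bit positions are an assumption, following the wChannelConfig spatial-location bitmap of the audio channel cluster descriptor defined elsewhere in this specification.

```python
# Decode one waModes() element of an Up/Down-mix (or Dolby Prologic)
# Processing Unit descriptor into the set of output channels used in
# that mode. Assumed bit positions (wChannelConfig-style):
# D0 Left Front, D1 Right Front, D2 Center Front, D3 LFE,
# D4 Left Surround, D5 Right Surround, D6 Left of Center, D7 Right of Center.
LOCATIONS = ["Left Front", "Right Front", "Center Front", "LFE",
             "Left Surround", "Right Surround",
             "Left of Center", "Right of Center"]

def decode_mode(wa_mode: int) -> list[str]:
    return [name for bit, name in enumerate(LOCATIONS) if wa_mode & (1 << bit)]

# Example: a mode using L, R, C, Ls, Rs (bits 0, 1, 2, 4, 5).
print(decode_mode(0x0037))
```

Output channels whose bit is cleared in the selected mode must produce muted output, as stated above.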
The symbol for the 3D-Stereo Extender Unit is depicted in the following figure.

[image] Figure 3-8 3D-Stereo Extender Processing Unit Icon

3.5.6.4 Reverberation Processing Unit

The Reverberation Processing Unit is used to add room acoustics effects to the original audio information. These effects can range from small room reverberation effects to the simulation of a large concert hall reverberation. A number of parameters can be manipulated to obtain the desired reverberation effects:
· Reverb Type: Room1, Room2, Room3, Hall1, Hall2, Plate, Delay, and Panning Delay.
https://w.atwiki.jp/usb_audio/pages/33.html
· Reverb Level: sets the amount of reverberant sound.
· Reverb Time: sets the time over which the reverberation will continue.
· Reverb Delay Feedback: used with Reverb Types Delay and Panning Delay; sets the way in which the delay repeats.

The effects of the Reverberation Processing Unit can be bypassed at all times through manipulation of the Enable Processing Control. In principle, the algorithm to produce the desired reverberation effect influences all channels as a whole. It is entirely left to the designer how a certain reverberation effect is obtained. It is not the intention of this specification to precisely define all the parameters that influence the reverberation experience (for instance, in a multi-channel system it is possible to create very similar reverberation impressions using different algorithms and parameter settings on all channels).

The symbol for the Reverberation Processing Unit can be found in the following figure.

[image] Figure 3-9 Reverberation Processing Unit Icon

3.5.6.5 Chorus Processing Unit

The Chorus Processing Unit is used to add chorus effects to the original audio information. A number of parameters can be manipulated to obtain the desired chorus effects:
· Chorus Level: controls the amount of the chorus effect sound.
· Chorus Modulation Rate: sets the speed (frequency) of the modulator of the chorus.
· Chorus Modulation Depth: sets the depth at which the chorus sound is modulated.

The effects of the Chorus Processing Unit can be bypassed at all times through manipulation of the Enable Processing Control. In principle, the algorithm to produce the desired chorus effect influences all channels as a whole. It is entirely left to the designer how a certain chorus effect is obtained. It is not the intention of this specification to precisely define all the parameters that influence the chorus experience.
The symbol for the Chorus Processing Unit can be found in the following figure.

[image] Figure 3-10 Chorus Processing Unit Icon

3.5.6.6 Dynamic Range Compressor Processing Unit

The Dynamic Range Compressor Processing Unit is used to intelligently limit the dynamic range of the original audio information. A number of parameters can be manipulated to influence the desired compression.

[image] Figure 3-11 Dynamic Range Compressor Transfer Characteristic

· Compression Ratio: determines the slope of the static input-to-output transfer characteristic in the compressor’s active input range. The compression is defined in terms of the compression ratio R, which is the inverse of the derivative of the output power PO as a function of the input power PI when PO and PI are expressed in dB:

R = (dPO / dPI)^-1

PR is the reference level and it is made equal to the so-called line level. All levels are expressed relative to the line level (0 dB), which is usually 15-20 dB below the maximum level. Compression is obtained when R > 1; R = 1 does not affect the signal, and R < 1 gives rise to expansion.
· Maximum Amplitude: the upper boundary of the active input range, relative to the line level (0 dB). Expressed in dB.
· Threshold: the lower boundary of the active input range, relative to the line level (0 dB).
· Attack Time: determines the response of the compressor as a function of time to a step in the input level. Expressed in ms.
· Release Time: relates to the recovery time of the gain of the compressor after a loud passage. Expressed in ms.

The effects of the Dynamic Range Compressor Processing Unit can be bypassed at all times through manipulation of the Enable Processing Control. In principle, the algorithm to produce the desired dynamic range compression influences all channels as a whole. It is entirely left to the designer how a certain dynamic range compression is obtained.
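The static transfer characteristic described above can be illustrated numerically. This is a minimal sketch, assuming the simplest piecewise-linear characteristic: unity gain below the Threshold, and slope 1/R (in dB terms) between the Threshold and the Maximum Amplitude. The actual curve shape is left to the designer, as the text notes.

```python
# Numeric sketch of a static compressor transfer characteristic.
# Assumption: unity gain below the Threshold; in the active range the
# output level rises with slope 1/R, so R = (dPO/dPI)^-1 as defined above.
# All levels are in dB relative to the line level (0 dB).

def static_transfer(p_in_db: float, ratio: float, threshold_db: float) -> float:
    """Output level in dB for a given input level in dB."""
    if p_in_db <= threshold_db:
        return p_in_db                                   # below threshold: unchanged
    return threshold_db + (p_in_db - threshold_db) / ratio

# R = 2, Threshold = -20 dB: a 10 dB rise above threshold yields only 5 dB out.
print(static_transfer(-30.0, 2.0, -20.0))  # -30.0 (below threshold)
print(static_transfer(-10.0, 2.0, -20.0))  # -15.0 (compressed)
```

With R = 1 the characteristic is the identity, matching the statement that R = 1 does not affect the signal.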
The symbol for the Dynamic Range Compressor Processing Unit can be found in the following figure.

[image] Figure 3-12 Dynamic Range Compressor Processing Unit Icon

3.5.7 Extension Unit

The Extension Unit (XU) is the method provided by this specification to easily add vendor-specific building blocks to the specification. The Extension Unit provides one or more logical input channels, grouped into one or more audio channel clusters, and transforms them into a number of logical output channels, grouped into one audio channel cluster. Therefore, the Extension Unit can have multiple Input Pins and has a single Output Pin.

Extension Units are required to support at least the Enable Processing Control, allowing the Host software to bypass whatever functionality is incorporated in the Extension Unit. Although a generic audio driver will not be able to determine what functionality is implemented in the Extension Unit, let alone manipulate it, it will still be capable of recognizing the presence of vendor-specific extensions and assume default behavior for those Units.

The symbol for the Extension Unit can be found in the following figure.

[image] Figure 3-13 Extension Unit Icon

3.5.8 Associated Interfaces

In some cases, an audio function building block (Terminal, Mixer Unit, Feature Unit, and so on) needs to be associated with interfaces that are not part of the Audio Interface Collection. As an example, consider a speaker system with a front-panel volume knob. The manufacturer might want to impose a binding between the front-panel volume Control and the speaker system’s volume setting. The volume knob could be represented by a HID interface that coexists with the Audio Interface Collection.
To create a binding between the Feature Unit inside the audio function that deals with master Volume Control and the front-panel volume knob, the Feature Unit descriptor can be supplemented by a special Associated Interface descriptor that holds a link to the associated HID interface. In general, each Terminal or Unit descriptor can be supplemented by one or more optional Associated Interface descriptors that hold a reference to an interface. This interface is external to the audio function and interacts in a certain way with the Terminal or Unit. The layout of the Associated Interface descriptor is open-ended and is qualified by the Entity type it succeeds and by the target interface Class type it references. For the time being, this specification does not define any specific Associated Interface descriptor layout.

3.6 Copy Protection

Because the Audio Device Class is primarily dealing with digital audio streams, the issue of protecting these (often copyrighted) streams cannot be ignored. Therefore, this specification provides the means to preserve whatever copyright information is available. However, it is the responsibility of the Host software to manage the flow of copy protection information throughout the audio function.

Copy protection issues come into play whenever digital audio streams enter or leave the audio function. Therefore, the copy protection mechanism is implemented at the Terminal level in the audio function. Streams entering the audio function can be accompanied by specific information describing the copy protection level of that audio stream. Likewise, streams leaving the audio function should be accompanied by the appropriate copy protection information, if the hardware permits it. This specification provides for two dedicated requests that can be used to manage the copy protection mechanism.
The Get Copy Protect request can be used to retrieve copy protection information from an Input Terminal, whereas the Set Copy Protect request is used to preset the copy protection level of an Output Terminal. This specification provides for three levels of copy permission, similar to CGMS (Copy Generation Management System) and SCMS (Serial Copy Management System):
· Level 0: Copying is permitted without restriction. The material is either not copyrighted, or the copyright is not asserted.
· Level 1: One generation of copies may be made. The material is copyright protected and is the original.
· Level 2: The material is copyright protected and no digital copying is permitted.

3.7 Operational Model

A device can support multiple configurations. Within each configuration there can be multiple interfaces, each possibly having alternate settings. These interfaces can pertain to different functions that co-reside in the same composite device. Even several independent audio functions can exist in the same device. Interfaces belonging to the same audio function are grouped into an Audio Interface Collection. If the device contains multiple independent audio functions, there must be multiple Audio Interface Collections, each providing full access to its associated audio function.

As an example of a composite device, consider a PC monitor equipped with a built-in stereo speaker system. Such a device could be configured to have one interface dealing with configuration and control of the monitor part of the device (HID Class), while a Collection of two other interfaces deals with its audio aspects. One of those, the AudioControl interface, is used to control the inner workings of the function (Volume Control etc.), whereas the other, the AudioStreaming interface, handles the data traffic sent to the monitor’s audio subsystem.
The AudioStreaming interface could be configured to operate in mono mode (alternate setting x), in which only a single channel data stream is sent to the audio function. The receiving Input Terminal could duplicate this audio stream into two logical channels, and those could then be reproduced on both speakers. From an interface point of view, such a setup requires one isochronous endpoint in the AudioStreaming interface to receive the mono audio data stream, in addition to the mandatory control endpoint and optional interrupt endpoint in the AudioControl interface.

The same system could be used to play back stereo audio. In this case, the stereo AudioStreaming interface must be selected (alternate setting y). This interface also consists of a single isochronous endpoint, now receiving a data stream that interleaves left and right channel samples. The receiving Input Terminal now splits the stream into a Left and a Right logical channel. The AudioControl interface remains unchanged. If the above AudioStreaming interface were an asynchronous sink, one extra isochronous synch endpoint would also be necessary.

Audio Interface Collections can be dynamic. Because the AudioControl interface, together with its associated AudioStreaming interface(s), constitutes the ‘logical interface’ to the audio function, they must all come into existence at the same moment in time.

As stated earlier, audio functionality is located at the interface level in the device class hierarchy. The following sections describe the Audio Interface Collection, containing a single AudioControl interface and optional AudioStreaming interfaces, together with their associated endpoints that are used for audio function control and for audio data stream transfer.

3.7.1 AudioControl Interface

To control the functional behavior of a particular audio function, the Host can manipulate the Units and Terminals inside the audio function.
To make these objects accessible, the audio function must expose a single AudioControl interface. This interface can contain the following endpoints:
· A control endpoint for manipulating Unit and Terminal settings and retrieving the state of the audio function. This endpoint is mandatory, and the default endpoint 0 is used for this purpose.
· An interrupt endpoint for status returns. This endpoint is optional.

The AudioControl interface is the single entry point to access the internals of the audio function. All requests that are concerned with the manipulation of certain audio Controls within the audio function’s Units or Terminals must be directed to the AudioControl interface of the audio function. Likewise, all descriptors related to the internals of the audio function are part of the class-specific AudioControl interface descriptor.

The AudioControl interface of an audio function may support multiple alternate settings. Alternate settings of the AudioControl interface could, for instance, be used to implement audio functions that support multiple topologies by presenting different class-specific AudioControl interface descriptors for each alternate setting.

3.7.1.1 Control Endpoint

The audio interface class uses endpoint 0 (the default pipe) as the standard way to control the audio function using class-specific requests. These requests are always directed to one of the Units or Terminals that make up the audio function. The format and contents of these requests are detailed further in this document.

3.7.1.2 Status Interrupt Endpoint

A USB AudioControl interface can support an optional interrupt endpoint to inform the Host about the status of the different addressable Entities (Terminals, Units, interfaces and endpoints) inside the audio function. In fact, the interrupt endpoint is used by the entire Audio Interface Collection to convey status information to the Host.
It is considered part of the AudioControl interface because this is the anchor interface for the Collection. The interrupt data is a 2-byte entity. The bStatusType field contains information in D7, indicating whether there is still an interrupt pending or not. This bit remains set until all pending interrupts are properly serviced. The other bits are used to report the cause of the interrupt in more detail.

Bit D6 of the bStatusType field indicates a change in memory contents on one of the addressable Entities inside the audio function. This bit is cleared by a Get Memory request on the appropriate Entity. Bits D3..0 indicate the originator of the current interrupt. All addressable Entities inside an audio function can be originators. The contents of the bOriginator field must be interpreted according to the code in D3..0 of the bStatusType field. If the originator is the AudioControl interface, the bOriginator field contains the Terminal ID or Unit ID of the Entity that caused the interrupt to occur. If the bOriginator field is set to zero, the ‘virtual’ Entity interface is the originator. This can be used to report global AudioControl interface changes to the Host. If the originator is an AudioStreaming interface, the bOriginator field contains the interface number of the AudioStreaming interface. Likewise, it contains the endpoint number if the originator were an AudioStreaming endpoint.

The proper response to an interrupt is either a Get Status request (D6=0) or a Get Memory request (D6=1). Issuing these requests to the appropriate originator must clear the Interrupt Pending bit and the Memory Contents Changed bit, if applicable. The following table specifies the format of the status word.
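The 2-byte status word described above can be parsed as follows. This is a minimal sketch based on the field layout given in Table 3-1 (bStatusType followed by bOriginator).

```python
# Parse the 2-byte status word delivered on the status interrupt endpoint.
# Layout per Table 3-1: byte 0 = bStatusType, byte 1 = bOriginator.
ORIGINATOR_TYPES = {0: "AudioControl interface",
                    1: "AudioStreaming interface",
                    2: "AudioStreaming endpoint"}

def parse_status_word(data: bytes) -> dict:
    b_status_type, b_originator = data[0], data[1]
    return {
        "interrupt_pending": bool(b_status_type & 0x80),  # D7
        "memory_changed":    bool(b_status_type & 0x40),  # D6
        "originator_type":   ORIGINATOR_TYPES.get(
                                 b_status_type & 0x0F, "Reserved"),  # D3..0
        "originator_id":     b_originator,  # Terminal/Unit ID, interface
                                            # number, or endpoint number
    }

# Example: interrupt pending, originated by Unit/Terminal ID 5
# reported through the AudioControl interface.
print(parse_status_word(bytes([0x80, 0x05])))
```

Depending on the memory_changed bit, the Host would respond with either a Get Status or a Get Memory request, as described above.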
https://w.atwiki.jp/usb_audio/pages/41.html
4.6.1 AS Isochronous Audio Data Endpoint Descriptors

The standard and class-specific audio data endpoint descriptors provide pertinent information on how audio data streams are communicated to the audio function. In addition, specific endpoint capabilities and properties are reported.

4.6.1.1 Standard AS Isochronous Audio Data Endpoint Descriptor

The standard AS isochronous audio data endpoint descriptor is identical to the standard endpoint descriptor defined in Section 9.6.4, “Endpoint,” of the USB Specification and further expanded as defined in the Universal Serial Bus Class Specification. D7 of the bEndpointAddress field indicates whether the endpoint is an audio source (D7 = 1) or an audio sink (D7 = 0). The bmAttributes field bits are set to reflect the isochronous type of the endpoint. The synchronization type is indicated by D3..2 and must be set to Asynchronous, Adaptive or Synchronous. For further details, refer to Section 5.10.4.1, “Synchronous Type,” of the USB Specification.

Table 4-20 Standard AS Isochronous Audio Data Endpoint Descriptor

Offset Field Size Value Description
0 bLength 1 Number Size of this descriptor, in bytes: 9
1 bDescriptorType 1 Constant ENDPOINT descriptor type.
2 bEndpointAddress 1 Endpoint The address of the endpoint on the USB device described by this descriptor. The address is encoded as follows: D7: Direction. 0 = OUT endpoint, 1 = IN endpoint. D6..4: Reserved, reset to zero. D3..0: The endpoint number, determined by the designer.
3 bmAttributes 1 Bit Map D3..2: Synchronization type. 01 = Asynchronous, 10 = Adaptive, 11 = Synchronous. D1..0: Transfer type. 01 = Isochronous. All other bits are reserved.
4 wMaxPacketSize 2 Number Maximum packet size this endpoint is capable of sending or receiving when this configuration is selected. This is determined by the audio bandwidth constraints of the endpoint.
6 bInterval 1 Number Interval for polling endpoint for data transfers, expressed in milliseconds. Must be set to 1.
7 bRefresh 1 Number Reset to 0.
8 bSynchAddress 1 Endpoint The address of the endpoint used to communicate synchronization information if required by this endpoint. Reset to zero if no synchronization pipe is used.

4.6.1.2 Class-Specific AS Isochronous Audio Data Endpoint Descriptor

The bmAttributes field indicates which endpoint-specific Controls this endpoint supports through bits D6..0. Bit D7 is reserved to indicate whether the endpoint always needs USB packets of wMaxPacketSize length (D7 = 1) or whether it can handle short packets (D7 = 0). In any case, the endpoint is required to support null packets. This bit must be used by the Host software to determine whether the driver should pad all potential short packets (except null packets) with zero bytes to wMaxPacketSize length before sending them to an OUT endpoint. Likewise, when receiving data from an IN endpoint, the Host software must be prepared to receive more bytes than expected and discard the superfluous zero bytes.

The bLockDelayUnits and wLockDelay fields are used to indicate to the Host how long it takes for the clock recovery circuitry of this endpoint to lock and reliably produce or consume the audio data stream. This information can be used by the Host to take appropriate action so that no meaningful data gets lost during the locking period (for instance, sending digital silence during the lock period). Depending on the implementation, the locking period can be a fixed amount of time or can be proportional to the sampling frequency; in the latter case, it usually takes a fixed number of samples to become locked. To accommodate both cases, the bLockDelayUnits field indicates whether the wLockDelay field is expressed in time (milliseconds) or in a number of samples.
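The two wLockDelay encodings described above can be converted to a common unit as follows. This is a minimal sketch; the unit codes (1 = milliseconds, 2 = decoded PCM samples) are those listed in the class-specific endpoint descriptor table.

```python
# Convert a wLockDelay value to milliseconds, given bLockDelayUnits.
# Unit codes from the class-specific AS endpoint descriptor:
# 0 = undefined, 1 = milliseconds, 2 = decoded PCM samples.

def lock_delay_ms(units: int, w_lock_delay: int, sample_rate_hz: int) -> float:
    if units == 1:                   # already expressed in milliseconds
        return float(w_lock_delay)
    if units == 2:                   # a fixed number of samples to lock
        return w_lock_delay * 1000.0 / sample_rate_hz
    raise ValueError("lock delay undefined or reserved")

# Example: a 441-sample lock delay at 44.1 kHz is 10 ms,
# during which the Host could send digital silence.
print(lock_delay_ms(2, 441, 44100))  # 10.0
print(lock_delay_ms(1, 5, 44100))    # 5.0
```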
Note: Some implementations may use locking strategies that do not lead to either a fixed time or a fixed number of samples lock delay. In this case, a worst case value can be reported back to the Host.

The bLockDelayUnits and wLockDelay fields are only applicable for synchronous and adaptive endpoints. For asynchronous endpoints, the clock is generated internally in the audio function and is completely independent. In this case, bLockDelayUnits and wLockDelay must be set to zero.

Table 4-21 Class-Specific AS Isochronous Audio Data Endpoint Descriptor

Offset Field Size Value Description
0 bLength 1 Number Size of this descriptor, in bytes: 7
1 bDescriptorType 1 Constant CS_ENDPOINT descriptor type.
2 bDescriptorSubtype 1 Constant EP_GENERAL descriptor subtype.
3 bmAttributes 1 Bit Map A bit in the range D6..0 set to 1 indicates that the mentioned Control is supported by this endpoint. D0: Sampling Frequency. D1: Pitch. D6..2: Reserved. Bit D7 indicates a requirement for wMaxPacketSize packets. D7: MaxPacketsOnly.
4 bLockDelayUnits 1 Number Indicates the units used for the wLockDelay field. 0: Undefined. 1: Milliseconds. 2: Decoded PCM samples. 3..255: Reserved.
5 wLockDelay 2 Number Indicates the time it takes this endpoint to reliably lock its internal clock recovery circuitry. Units used depend on the value of the bLockDelayUnits field.

4.6.2 AS Isochronous Synch Endpoint Descriptor

This descriptor is present only when one or more isochronous audio data endpoints of the adaptive source type or the asynchronous sink type are implemented.

4.6.2.1 Standard AS Isochronous Synch Endpoint Descriptor

The standard AS isochronous synch endpoint descriptor is identical to the standard endpoint descriptor defined in Section 9.6.4, “Endpoint,” of the USB Specification and further expanded as defined in the Universal Serial Bus Class Specification.
The bmAttributes field bits are set to reflect the isochronous type and synchronization type of the endpoint.

Table 4-22: Standard AS Isochronous Synch Endpoint Descriptor

Offset | Field | Size | Value | Description
0 | bLength | 1 | Number | Size of this descriptor, in bytes: 9
1 | bDescriptorType | 1 | Constant | ENDPOINT descriptor type.
2 | bEndpointAddress | 1 | Endpoint | The address of the endpoint on the USB device described by this descriptor. The address is encoded as follows: D7: Direction (0 = OUT endpoint for sources, 1 = IN endpoint for sinks); D6..4: Reserved, reset to zero; D3..0: The endpoint number, determined by the designer.
3 | bmAttributes | 1 | Bit Map | D3..2: Synchronization type (00 = None); D1..0: Transfer type (01 = Isochronous). All other bits are reserved.
4 | wMaxPacketSize | 2 | Number | Maximum packet size this endpoint is capable of sending or receiving when this configuration is selected.
6 | bInterval | 1 | Number | Interval for polling endpoint for data transfers, expressed in milliseconds. Must be set to 1.
7 | bRefresh | 1 | Number | This field indicates the rate at which an isochronous synchronization pipe provides new synchronization feedback data. This rate must be a power of 2, therefore only the power is reported back, and the range of this field is from 1 (2 ms) to 9 (512 ms).
8 | bSynchAddress | 1 | Endpoint | Must be reset to zero.

4.6.2.2 Class-Specific AS Isochronous Synch Endpoint Descriptor

There is no class-specific AS isochronous synch endpoint descriptor.

5 Requests

5.1 Standard Requests

The Audio Device Class supports the standard requests described in Section 9, "USB Device Framework," of the USB Specification. The Audio Device Class places no specific requirements on the values for the standard requests.

5.2 Class-Specific Requests

Most class-specific requests are used to set and get audio-related Controls.
These Controls fall into two main groups: those that manipulate the audio function Controls, such as volume, tone, selector position, etc., and those that influence data transfer over an isochronous endpoint, such as the current sampling frequency.

· AudioControl Requests. Control of an audio function is performed through the manipulation of the attributes of individual Controls that are embedded in the Units of the audio function. The class-specific AudioControl interface descriptor contains a collection of Unit descriptors, each indicating which Controls are present in every Unit. AudioControl requests are always directed to the single AudioControl interface of the audio function. The request contains enough information (Unit ID, Channel Number, and Control Selector) for the audio function to decide where a specific request must be routed. The same request layout can be used for vendor-specific requests to Extension Units. However, they are not covered by this specification.

· AudioStreaming Requests. Control of the class-specific behavior of an AudioStreaming interface is performed through manipulation of either interface Controls or endpoint Controls. These can be either standard Controls, as defined in this specification, or vendor-specific. In either case, the same request layout can be used. AudioStreaming requests are directed to the recipient where the Control resides. This can be either the interface or its associated isochronous endpoint.

The Audio Device Class supports these additional class-specific requests:

· Memory Requests. Every addressable Entity in the audio function (Terminal, Unit, and endpoint) can expose a memory-mapped interface that provides the means to generically manipulate the Entity. Vendor-specific Control implementations could be based on this type of request.

· Get Status Requests. The Get Status request is a general query to an Entity in the AudioControl interface or one of the AudioStreaming interfaces and does not manipulate Controls.
In principle, all requests are optional. If an audio function does not support a certain request, it must indicate this by stalling the control pipe when that request is issued to the function. However, if a certain Set request is supported, the associated Get request must also be supported. Get requests may be supported without the associated Set request being supported. The rest of this section describes the class-specific requests used to manipulate both audio Controls and endpoint Controls.

5.2.1 Request Layout

The following paragraphs describe the general structure of the Set and Get requests. Subsequent paragraphs detail the use of the Set/Get requests for the different request types.
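The Set/Get request structure can be illustrated as a setup-packet builder. This is a sketch under the Audio 1.0 conventions (SET_CUR 0x01 and GET_CUR 0x81 request codes, Control Selector and Channel Number packed in wValue, Entity ID and interface number in wIndex); the helper name is ours:

```python
import struct

SET_CUR, GET_CUR = 0x01, 0x81  # Audio 1.0 class-specific request codes

def audio_control_setup(b_request: int, control_selector: int, channel: int,
                        entity_id: int, interface: int, w_length: int) -> bytes:
    """Build the 8-byte USB setup packet for an AudioControl Set/Get request.

    wValue = (Control Selector << 8) | Channel Number
    wIndex = (Entity ID << 8) | interface number
    """
    host_to_device = b_request < 0x80
    # Class request, interface recipient: 0x21 for Set, 0xA1 for Get.
    bm_request_type = 0x21 if host_to_device else 0xA1
    w_value = (control_selector << 8) | channel
    w_index = (entity_id << 8) | interface
    return struct.pack('<BBHHH', bm_request_type, b_request,
                       w_value, w_index, w_length)
```

Requests addressed to an endpoint Control would use the endpoint recipient variant of bmRequestType instead; the interface form above matches the AudioControl routing described earlier (Unit ID, Channel Number, Control Selector).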
https://w.atwiki.jp/usb_audio/pages/65.html
Source: Audio Devices Rev. 2.0 Spec and Adopters Agreement (ZIP). Universal Serial Bus Device Class Definition for Audio Data Formats, Release 2.0, May 31, 2006.

Offset | Field | Size | Value | Description
4 | qNanoSeconds | 8 | Number | Offset in nanoseconds from the beginning of the audio stream.

Note: Timing information is intrinsically provided by the isochronous data transport mechanism itself (packets are synchronized to the USB SOF, and the number of samples per packet is an overall measure of the audio data sampling rate). However, the high-resolution presentation timestamp could potentially be used to deliver more accurate instantaneous timing information to the sink or to report a (constant) delay between the moment of transport over the USB and the moment of actual rendition. Care must be taken to ensure that the information contained in the Packet Header is at all times in agreement with the implicit timing information delivered by the isochronous streaming mechanism.

3 Adding New Audio Data Formats

Adding new Audio Data Formats to this specification is achieved by proposing a fully documented Audio Data Format to the Audio Device Class Working Group. Upon acceptance, the group will register the new Audio Data Format (attributing a unique bit position in the bmFormats field of the class-specific AS interface descriptor) and update this document accordingly. This process will also guarantee that new releases of generic USB audio drivers will support the newly registered Audio Data Formats. It is always possible to use vendor-specific definitions if the above procedure is considered unsatisfactory.

4 Adding New Side Band Protocols

Adding new Side Band Protocols to this specification is achieved by proposing a fully documented Side Band Protocol to the Audio Device Class Working Group.
Upon acceptance, the group will register the new Side Band Protocol (attributing a unique SideBandProtocol constant) and update this document accordingly. This process will also guarantee that new releases of generic USB audio drivers will support the newly registered Side Band Protocols. It is always possible to use vendor-specific definitions if the above procedure is considered unsatisfactory.

Appendix A. Additional Audio Device Class Codes

A.1 Format Type Codes

Table A-1: Format Type Codes

Format Type Code | Value
FORMAT_TYPE_UNDEFINED | 0x00
FORMAT_TYPE_I | 0x01
FORMAT_TYPE_II | 0x02
FORMAT_TYPE_III | 0x03
FORMAT_TYPE_IV | 0x04
EXT_FORMAT_TYPE_I | 0x81
EXT_FORMAT_TYPE_II | 0x82
EXT_FORMAT_TYPE_III | 0x83

A.2 Audio Data Format Bit Allocation in the bmFormats Field

A.2.1 Audio Data Format Type I Bit Allocations

Table A-2: Audio Data Format Type I Bit Allocations

Name | bmFormats
PCM | D0
PCM8 | D1
IEEE_FLOAT | D2
ALAW | D3
MULAW | D4
Reserved. Must be set to 0. | D30..D5
TYPE_I_RAW_DATA | D31

A.2.2 Audio Data Format Type II Bit Allocations

Table A-3: Audio Data Format Type II Bit Allocations

Name | bmFormats
MPEG | D0
AC-3 | D1
WMA | D2
DTS | D3
Reserved. Must be set to 0. | D30..D4
TYPE_II_RAW_DATA | D31

A.2.3 Audio Data Format Type III Bit Allocations

Table A-4: Audio Data Format Type III Bit Allocations

Name | bmFormats
IEC61937_AC-3 | D0
IEC61937_MPEG-1_Layer1 | D1
IEC61937_MPEG-1_Layer2/3 or IEC61937_MPEG-2_NOEXT | D2
IEC61937_MPEG-2_EXT | D3
IEC61937_MPEG-2_AAC_ADTS | D4
IEC61937_MPEG-2_Layer1_LS | D5
IEC61937_MPEG-2_Layer2/3_LS | D6
IEC61937_DTS-I | D7
IEC61937_DTS-II | D8
IEC61937_DTS-III | D9
IEC61937_ATRAC | D10
IEC61937_ATRAC2/3 | D11
TYPE_III_WMA | D12
Reserved. Must be set to 0. | D31..D13

A.2.4 Audio Data Format Type IV Bit Allocations

Table A-5: Audio Data Format Type IV Bit Allocations

Name | bmFormats
PCM | D0
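The bmFormats bit allocations can be decoded mechanically. A small sketch for the Type I allocations of Table A-2 (the dict and function are ours, not part of the specification):

```python
TYPE_I_FORMATS = {  # bit position -> name, per Table A-2
    0: 'PCM',
    1: 'PCM8',
    2: 'IEEE_FLOAT',
    3: 'ALAW',
    4: 'MULAW',
    31: 'TYPE_I_RAW_DATA',
}

def decode_type_i_formats(bm_formats: int) -> list:
    """Return the Type I Audio Data Format names whose bits are set in bmFormats."""
    return [name for bit, name in sorted(TYPE_I_FORMATS.items())
            if bm_formats & (1 << bit)]
```

The Type II, III, and IV tables decode the same way with their own bit maps.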
https://w.atwiki.jp/usb_audio/pages/28.html
Source: Audio Terminal Types 1.0 (PDF). USB Device Class Definition for Terminal Types, Release 1.0, March 18, 1998.

Terminal Type | Code | I/O | Description
MiniDisk | 0x0706 | I/O | Minidisk player.
Analog Tape | 0x0707 | I/O | Analog Audio Tape.
Phonograph | 0x0708 | I | Analog vinyl record player.
VCR Audio | 0x0709 | I | Audio track of VCR.
Video Disc Audio | 0x070A | I | Audio track of VideoDisc player.
DVD Audio | 0x070B | I | Audio track of DVD player.
TV Tuner Audio | 0x070C | I | Audio track of TV tuner.
Satellite Receiver Audio | 0x070D | I | Audio track of satellite receiver.
Cable Tuner Audio | 0x070E | I | Audio track of cable tuner.
DSS Audio | 0x070F | I | Audio track of DSS receiver.
Radio Receiver | 0x0710 | I | AM/FM radio receiver.
Radio Transmitter | 0x0711 | O | AM/FM radio transmitter.
Multi-track Recorder | 0x0712 | I/O | A multi-track recording system.
Synthesizer | 0x0713 | I | Synthesizer.

3 Adding New Terminal Types

Adding new Terminal Types to this specification is achieved by proposing a fully documented Terminal Type to the Audio Device Class Working Group. Upon acceptance, the group will register the new Terminal Type (attributing a unique Terminal Type Code) and update this document accordingly. This process will also guarantee that new releases of generic USB audio drivers will support the newly registered Terminal Types. It is always possible to use vendor-specific definitions if the above procedure is considered unsatisfactory.
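Host software typically keeps the Terminal Type codes in a simple lookup table. A sketch covering only the 0x07xx codes listed above (the dict and fallback text are ours):

```python
TERMINAL_TYPES_07XX = {  # excerpt of the 0x07xx range shown in the table above
    0x0706: 'MiniDisk', 0x0707: 'Analog Tape', 0x0708: 'Phonograph',
    0x0709: 'VCR Audio', 0x070A: 'Video Disc Audio', 0x070B: 'DVD Audio',
    0x070C: 'TV Tuner Audio', 0x070D: 'Satellite Receiver Audio',
    0x070E: 'Cable Tuner Audio', 0x070F: 'DSS Audio',
    0x0710: 'Radio Receiver', 0x0711: 'Radio Transmitter',
    0x0712: 'Multi-track Recorder', 0x0713: 'Synthesizer',
}

def terminal_type_name(code: int) -> str:
    """Map a wTerminalType code to its name, or flag it as unknown."""
    return TERMINAL_TYPES_07XX.get(code, f'unknown (0x{code:04X})')
```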
https://w.atwiki.jp/potyolove3/pages/56.html
PLUGIN 1/22 NEW CBF_SenderCamouflage: command block plugin
PLUGIN 1/22 NEW ColorTeaming: PVP team-assignment plugin
PLUGIN 1/3 dynmap: web-based map display plugin
PLUGIN 1/3 DynmapCBBridge: bridge between the Dynmap MOD and the plugin
PLUGIN 1/25 NEW GatyaPon: gachapon (capsule-toy) plugin
PLUGIN 1/3 Modifyworld: admin; prerequisite for PermissionsEx
PLUGIN 1/3 MoreSounds: plays sounds on login, logout, and similar events
PLUGIN 1/3 Multiverse-Core: admin; adds extra dimensions
PLUGIN 1/17 NEW OpenInv: admin; inventory inspection
PLUGIN 1/3 PermissionsEx: admin; permission management
PLUGIN 1/3 PluginReloader: admin; restarts plugins without taking the server down
PLUGIN 1/26 NEW RedstoneSensor: player-detection plugin
PLUGIN 1/24 NEW Spawner: spawner block plugin
PLUGIN 1/3 Stargate: inter-dimension warp gate plugin
PLUGIN 1/17 NEW WirelessRedstone: wireless redstone plugin
PLUGIN 1/3 WorldBorder: admin; limits dimension size
PLUGIN 1/3 WorldEdit: admin; world editing
PLUGIN 1/30 NEW worldguard: admin; world protection
https://w.atwiki.jp/usb_audio/pages/59.html
Source: Audio Devices Rev. 2.0 Spec and Adopters Agreement (ZIP). USB Device Class Definition for Audio Devices, Release 2.0, May 31, 2006.

• Musical Instrument: A musical instrument, e.g. piano, guitar, synthesizer, drum machine, etc.
• Pro-Audio: A device not typically used by consumers of audio, e.g. editing equipment, multi-track recording equipment, etc.
• Audio/Video: The audio from a device that also supplies simultaneous video where the expectation is that the audio is tightly coupled to the video, e.g. a camcorder, a DVD player, a television, etc.
• Control Panel: A device that is used to control the flow of audio through a system of audio devices, such as a mixer panel.
• Other: Any device whose primary purpose is sufficiently different from the above descriptions as to be considered a completely different form of device.

The assigned codes can be found in Appendix A.7, "Audio Function Category Codes" of this specification. All other Category codes are unused and reserved by this specification for future use.

3.10 Clock Domains

A Clock Domain is defined as a zone within which all sampling clocks are derived from the same master clock. Therefore, within the same Clock Domain, all sampling clocks are synchronous and their timing relationship is constant. However, the sampling clocks can be at different sampling frequencies. The master clock can be generated in many different ways. An internal crystal could be the master clock, the USB start of frame (SOF) could be used, or even an externally supplied clock could serve as a master clock. In general, multiple different Clock Domains can exist within the same audio function.

3.11 Audio Synchronization Types

Each isochronous audio endpoint used in an AudioStreaming interface belongs to a synchronization type as defined in Section 5 of the USB Specification. The following sections briefly describe the possible synchronization types.

3.11.1 Asynchronous

Asynchronous isochronous audio endpoints produce or consume data at a rate that is locked either to a clock external to the USB or to a free-running internal clock. These endpoints cannot be synchronized to the start of frame (SOF) or to any other clock in the USB domain.

3.11.2 Synchronous

The clock system of synchronous isochronous audio endpoints can be controlled externally through SOF synchronization. Such an endpoint must lock its sample clock to the 1 ms SOF tick. Optionally, a high-speed endpoint could lock its clock to the 125 μs tick that occurs at the beginning of every microframe to improve accuracy.

3.11.3 Adaptive

Adaptive isochronous audio endpoints are able to source or sink data at any rate within their operating range. This implies that these endpoints must run an internal process that allows them to match their natural data rate to the data rate that is imposed at their interface.

3.12 Inter Channel Synchronization

An important issue when dealing with audio, and 3-D audio in particular, is the phase relationship between different physical audio channels. Indeed, the virtual spatial position of an audio source is directly related to and influenced by the phase differences that are applied to the different physical audio channels used to reproduce the audio source. Therefore, it is imperative that USB audio functions respect the phase relationship among all related audio channels. However, the responsibility for maintaining the phase relation is shared among the USB host software, hardware, and all of the audio peripheral devices or functions.

To provide a manageable phase model to the host, an audio function is required to report its internal delay for every AudioStreaming interface. This delay is expressed in number of (micro)frames and is due to the fact that the audio function must buffer at least one (micro)frame worth of samples to effectively remove packet jitter within a (micro)frame. Furthermore, some audio functions will introduce extra delay because they need time to correctly interpret and process the audio data streams (for example, compression and decompression). However, it is required that an audio function introduces only an integer number of (micro)frames of delay. In the case of an audio source function, this implies that the audio function must guarantee that the first sample it fully acquires after SOFn (start of (micro)frame n) is the first sample of the packet it sends over USB during (micro)frame (n+δ). δ is the audio function's internal delay expressed in (micro)frames. The same rule applies for an audio sink function. The first sample in the packet, received over USB during (micro)frame n, must be the first sample that is fully reproduced during (micro)frame (n+δ). By following these rules, phase jitter is limited to ±1 audio sample. It is up to the host software to synchronize the different audio streams by scheduling the correct packets at the correct moment, taking into account the internal delays of all audio functions involved.

3.13 Audio Function Topology

To be able to manipulate the physical properties of an audio function, its functionality must be divided into addressable Entities. Two types of such generic Entities are identified and are called Units and Terminals. In addition, a special type of Entity is defined. These Entities are called Clock Entities and they are used to describe and manipulate the clock signals inside the audio function. Units provide the basic building blocks to fully describe most audio functions. Audio functions are built by connecting together several of these Units.

A Unit has one or more Input Pins and a single Output Pin, where each Pin represents a cluster of logical audio channels inside the audio function (see Section 3.13.1, "Audio Channel Cluster"). Units are wired together by connecting their I/O Pins according to the required topology. Note that it is perfectly legal to connect the Output Pin of an Entity to multiple Input Pins residing on different other Entities, effectively creating a one-to-many connection.

In addition, the concept of a Terminal is introduced. There are two types of Terminals. An Input Terminal (IT) is an Entity that represents a starting point for audio channels inside the audio function. An Output Terminal (OT) represents an ending point for audio channels. From the audio function's perspective, a USB endpoint is a typical example of an Input or Output Terminal. It either provides data streams to the audio function (IT) or consumes data streams coming from the audio function (OT). Likewise, a Digital-to-Analog converter, built into the audio function, is represented as an Output Terminal in the audio function's model. Connection to the Terminal is made through its single Input or Output Pin.

Input Pins of a Unit are numbered starting from one up to the total number of Input Pins on the Unit. The Output Pin number is always one. Input Terminals have only one Output Pin and its number is always one. Output Terminals have only one Input Pin and it is always numbered one.

The information traveling over I/O Pins is not necessarily of a digital nature. It is perfectly possible to use the Unit model to describe fully analog or even hybrid audio functions. The mere fact that I/O Pins are connected together is a guarantee (by construction) that the protocol and format, used over these connections (analog or digital), is compatible on both ends.

Every Unit in the audio function is fully described by its associated Unit descriptor (UD). The Unit descriptor contains all necessary fields to identify and describe the Unit. Likewise, there is a Terminal descriptor (TD) for every Terminal in the audio function. In addition, these descriptors provide all necessary information about the topology of the audio function. They fully describe how Terminals and Units are interconnected.
This specification describes the following types of standard Units and Terminals, which are considered adequate to represent most audio functions available today and in the near future:

• Input Terminal (IT)
• Output Terminal (OT)
• Mixer Unit (MU)
• Selector Unit (SU)
• Feature Unit (FU)
• Sampling Rate Converter Unit
• Effect Unit (EU)
• Processing Unit (PU)
• Extension Unit (XU)

Besides Units and Terminals, the concept of a Clock Entity is introduced. Three types of Clock Entities are defined by this specification:

• Clock Source (CS)
• Clock Selector (CX)
• Clock Multiplier (CM)

A Clock Source provides a certain sampling clock frequency to all or part of the audio function. A Clock Source can represent an internal sampling frequency generator, but it can also represent an external sampling clock signal input to the audio function. A Clock Source has a single Clock Output Pin that carries the sampling clock signal represented by the Clock Source. The Clock Output Pin number is always one.

A Clock Selector is used to select between multiple sampling clock signals that might be available in an audio function. It has multiple Clock Input Pins and a single Clock Output Pin. Clock Input Pins are numbered starting from one up to the total number of Clock Input Pins on the Clock Selector. The Clock Output Pin number is always one.

A Clock Multiplier is used to derive a new clock signal with a different frequency from the clock signal at its single Clock Input Pin. It does this by multiplying that clock signal frequency by a numerator P and then dividing it by a denominator Q. The values P and Q are fixed for a given Clock Multiplier. The new clock signal is guaranteed to be synchronous with the input clock signal. A Clock Multiplier has one Input Pin and one Output Pin and their numbers are always one. By using a combination of Clock Source, Clock Selector, and Clock Multiplier Entities, the most complex clock systems can be represented and exposed to Host software.

Clock Input and Output Pins are fundamentally different from Input and Output Pins defined for Units and Terminals. Clock Pins carry only clock signals and therefore cannot be connected to Unit or Terminal Input and Output Pins. They are only used to express clock circuitry topology.

Each Input and Output Terminal has a single Clock Input Pin that is connected to a Clock Output Pin of a Clock Entity. The clock signal carried by that Clock Output Pin determines at which sampling frequency the hardware represented by the Terminal is operating. Each Sampling Rate Converter Unit has two Clock Input Pins that are typically connected to the Clock Output Pins of two different Clock Entities. The clock signals carried by those Clock Output Pins determine the sampling frequencies between which the Sampling Rate Converter Unit is converting.

Each Clock Entity is described by a Clock Entity descriptor (CED). The Clock Entity descriptor contains all necessary fields to identify and describe the Clock Entity. The descriptors are further detailed in Section 4, "Descriptors" of this document.

The ensemble of Unit descriptors, Terminal descriptors, and Clock Entity descriptors provides a full description of the audio function to the Host. This information is typically retrieved from the device at enumeration time. By parsing the descriptors, a generic audio driver should be able to fully control the audio function, except for the functionality represented by Extension Units. Those require vendor-specific extensions to the audio class driver.
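The Clock Multiplier's P/Q relationship is exact, which is why the derived clock stays synchronous with its input. A minimal sketch using exact rational arithmetic (the function name is ours):

```python
from fractions import Fraction

def multiplied_clock_hz(input_hz: int, p: int, q: int) -> Fraction:
    """Output frequency of a Clock Multiplier: the input clock frequency
    multiplied by the numerator P and divided by the denominator Q.

    Using Fraction keeps the result exact; P and Q are fixed per Multiplier.
    """
    return Fraction(input_hz) * Fraction(p, q)
```

For example, a Multiplier with P = 1 and Q = 2 turns a 96 kHz input into 48 kHz.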
Important Note: The complete set of audio function descriptors provides only a static initial description of the audio function. During operation, a number of events can happen that force the audio function to change its state. Host software must be notified of these changes to remain 'in sync' with the audio function at all times. An extensive interrupt mechanism is in place to report any and all state changes to Host software.

Figure 3-2, "Inside the Audio Function" illustrates the concepts defined above. Using the iconic symbols defined further, it describes a hypothetical audio function that incorporates 16 Entities: three Input Terminals, five Units, three Output Terminals, two Clock Sources, a Clock Selector, and two Clock Multipliers. Each Entity has its unique ID (from 1 to 16) and a descriptor that fully describes the functionality of the Entity and also how that particular Entity is connected into the overall topology of the audio function.

Input Terminal 1 (IT 1) could be the representation of a USB OUT endpoint used to stream audio from the Host to the audio device. IT 2 could be the representation of an analog Line-In connector on the audio device, whereas IT 3 could be an analog Microphone-In connector on the audio device. Selector Unit 4 (SU 4) selects between the audio coming from the Host and the audio present at the Line In connector. Feature Unit 5 (FU 5) is then used to manipulate the audio (Volume, Bass, Treble …) before it is presented to Output Terminal 9 (OT 9). OT 9 could be the representation of a Headphone Out jack on the audio device.

At the same time, all three input sources (USB OUT, Line In, and Mic In) are connected to a Mixer Unit (MU 6) that effectively mixes the three sources together. The output of the Mixer is then fed into a Processing Unit 7 (PU 7) that could perform some audio processing algorithm(s) on the mix. The result is in turn sent to FU 8 where some final adjustments to the audio (Volume …) are made. FU 8 is connected to OT 10 and OT 11. OT 10 could represent speakers incorporated into the audio device and OT 11 could represent a USB IN endpoint used to send the processed audio to the Host for recording purposes.

Clock Source 12 (CS 12) could represent an internal sampling frequency generator, running at 96 kHz for instance. Clock Source 15 (CS 15) could be the representation of an external master sampling clock input that can be used to synchronize the device to an external source. Clock Selector 13 (CS 13) enables selection between the two available Clock Sources. The output of CS 13 provides a sampling frequency of 96 kHz to IT 1, IT 2, IT 3, OT 10, and OT 11. Clock Multiplier CM 14 further multiplies that clock signal by 0.5, providing a sampling frequency of 48 kHz to OT 9 for driving the headphone. Since all sampling frequencies used inside the audio function are at all times derived from a single master clock (internal or external), all audio streams in the audio function are synchronous.

The descriptors associated with each Entity clearly indicate to the Host what the exact nature of each Entity is. For instance, the IT 2 descriptor contains a field that indicates to the Host that it represents an external connector on the device, used as an analog Line In. Likewise, the MU 6 descriptor has a field that indicates that its Input Pin 1 is connected to the Output Pin of IT 1, Input Pin 2 is connected to the Output Pin of IT 2, and Input Pin 3 is connected to the Output Pin of IT 3. For further details on descriptor contents, refer to Section 4, "Descriptors" of this document.
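The example topology can be encoded as a small adjacency structure to make the one-to-many connection rule concrete. This is a hypothetical sketch: the entity IDs and wiring follow the Figure 3-2 walkthrough above (audio paths only, clock wiring omitted), but the TOPOLOGY encoding itself is ours, not part of the specification:

```python
# For each sink Entity ID, the source Entity IDs feeding its Input Pins,
# in pin order (hypothetical encoding of the Figure 3-2 example).
TOPOLOGY = {
    4:  [1, 2],     # SU 4 selects between IT 1 (USB OUT) and IT 2 (Line In)
    5:  [4],        # FU 5 follows the Selector
    6:  [1, 2, 3],  # MU 6 mixes all three input sources
    7:  [6],        # PU 7 processes the mix
    8:  [7],        # FU 8 makes final adjustments
    9:  [5],        # OT 9: headphone out
    10: [8],        # OT 10: built-in speakers
    11: [8],        # OT 11: USB IN endpoint for recording
}

def fan_out(entity_id: int) -> int:
    """Number of Input Pins (across all Entities) fed by this Entity's Output Pin."""
    return sum(sources.count(entity_id) for sources in TOPOLOGY.values())
```

IT 1 feeds both SU 4 and MU 6, and FU 8 feeds both OT 10 and OT 11: exactly the legal one-to-many connections described in Section 3.13.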
Figure 3-2: Inside the Audio Function (diagram not reproduced; it depicts the Entities and connections described above)

Inside an Entity, functionality is further described through Audio Controls. A Control typically provides access to a specific audio or clock property. Each Control has a set of attributes that can be manipulated or that present additional information on the behavior of the Control. A Control can have the following attributes:

• Current setting attribute
• Range attribute, a triplet consisting of:
  • Minimum setting attribute
  • Maximum setting attribute
  • Resolution attribute

As an example, consider a Volume Control inside a Feature Unit. By issuing the appropriate Get requests, the Host software can obtain values for the Volume Control's attributes and, for instance, use them to correctly display the Control on the screen. Setting the Volume Control's current attribute allows the Host software to change the volume setting of the Volume Control.

Additionally, each Entity in an audio function can have a memory space attribute. This attribute optionally provides generic access to the internal memory space of the Entity. This could be used to implement vendor-specific control of an Entity through generically provided access.

3.13.1 Audio Channel Cluster

An audio channel cluster is a grouping of audio channels that carry tightly related synchronous audio information. Inside the audio function, complete abstraction is made of the actual physical representation
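The Current/Minimum/Maximum/Resolution attribute set lets a host validate a requested setting before issuing a Set request. A minimal sketch of that validation (the function name and the floor-snapping policy are ours; a host could equally round to the nearest step):

```python
def snap_to_control_range(value: int, cur_min: int, cur_max: int, res: int) -> int:
    """Clamp a requested Control setting to [MIN, MAX] and snap it down to
    the nearest step on the RES grid anchored at MIN."""
    value = max(cur_min, min(cur_max, value))
    return cur_min + ((value - cur_min) // res) * res
```

For a Volume Control reporting MIN = 0, MAX = 10, RES = 2, a request of 7 would be snapped to 6 before being sent as the new current setting.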
https://w.atwiki.jp/usb_audio/pages/40.html
Source: Audio Device Document 1.0 (PDF). USB Device Class Definition for Audio Devices, Release 1.0, March 18, 1998.

…Extension Unit is not available in this case because it is bypassed). Default behavior is assumed when set to off. In the case of a single Input Pin, logical channels that enter the Extension Unit are passed unaltered for those channels that are also present in the output cluster. Logical channels not available in the output cluster are absorbed by the Extension Unit. Logical channels present in the output cluster but unavailable in the input cluster are muted. In the case of multiple Input Pins, corresponding logical input channels are equally mixed together before being passed to the output.

An index to a string descriptor is provided to further describe the Extension Unit. The following table outlines the Extension Unit descriptor.

Table 4-15: Extension Unit Descriptor

Offset | Field | Size | Value | Description
0 | bLength | 1 | Number | Size of this descriptor, in bytes: 13+p+n
1 | bDescriptorType | 1 | Constant | CS_INTERFACE descriptor type.
2 | bDescriptorSubtype | 1 | Constant | EXTENSION_UNIT descriptor subtype.
3 | bUnitID | 1 | Number | Constant uniquely identifying the Unit within the audio function. This value is used in all requests to address this Unit.
4 | wExtensionCode | 2 | Constant | Vendor-specific code identifying the Extension Unit.
6 | bNrInPins | 1 | Number | Number of Input Pins of this Unit: p
7 | baSourceID(1) | 1 | Number | ID of the Unit or Terminal to which the first Input Pin of this Extension Unit is connected.
… | … | … | … | …
7+(p-1) | baSourceID(p) | 1 | Number | ID of the Unit or Terminal to which the last Input Pin of this Extension Unit is connected.
7+p | bNrChannels | 1 | Number | Number of logical output channels in the audio channel cluster of the Extension Unit.
7+p+1 | wChannelConfig | 2 | Bitmap | Describes the spatial location of the logical channels in the audio channel cluster of the Extension Unit.
7+p+3 | iChannelNames | 1 | Index | Index of a string descriptor, describing the name of the first logical channel in the audio channel cluster of the Extension Unit.
11+p | bControlSize | 1 | Number | Size, in bytes, of the bmControls field: n
12+p | bmControls | n | Bitmap | A bit set to 1 indicates that the mentioned Control is supported: D0: Enable Processing; D1..(n*8-1): Reserved.
12+p+n | iExtension | 1 | Index | Index of a string descriptor, describing this Extension Unit.

4.3.2.8 Associated Interface Descriptor

The Associated Interface descriptor provides a means to indicate a relationship between a Terminal or a Unit and an interface external to the audio function. It directly follows the Entity descriptor to which it is related. The bInterfaceNr field contains the interface number of the associated interface. The remainder of the descriptor depends both on the Entity to which it is related and on the interface class of the target interface. At this moment, no specific layouts are defined by this specification. The following table outlines the Associated Interface descriptor.

Table 4-16: Associated Interface Descriptor

Offset | Field | Size | Value | Description
0 | bLength | 1 | Number | Size of this descriptor, in bytes: 4+x
1 | bDescriptorType | 1 | Constant | CS_INTERFACE descriptor type.
2 | bDescriptorSubtype | 1 | Constant | ASSOC_INTERFACE descriptor subtype.
3 | bInterfaceNr | 1 | Number | The interface number of the associated interface.
4 | Association-specific | x | Number | Association-specific extension to the open-ended descriptor.

4.4 AudioControl Endpoint Descriptors

The following sections describe all possible endpoint-related descriptors for the AudioControl interface.

4.4.1 AC Control Endpoint Descriptors

4.4.1.1 Standard AC Control Endpoint Descriptor

Because endpoint 0 is used as the AudioControl control endpoint, there is no dedicated standard control endpoint descriptor.
4.4.1.2 Class-Specific AC Control Endpoint Descriptor

There is no dedicated class-specific control endpoint descriptor.

4.4.2 AC Interrupt Endpoint Descriptors

4.4.2.1 Standard AC Interrupt Endpoint Descriptor

The interrupt endpoint descriptor is identical to the standard endpoint descriptor defined in Section 9.6.4, "Endpoint," of the USB Specification, and further expanded as defined in the Universal Serial Bus Class Specification. Its fields are set to reflect the interrupt type of the endpoint. This endpoint is optional. The following table outlines the standard AC Interrupt Endpoint descriptor.

Table 4-17 Standard AC Interrupt Endpoint Descriptor

Offset | Field            | Size | Value    | Description
0      | bLength          | 1    | Number   | Size of this descriptor, in bytes: 9
1      | bDescriptorType  | 1    | Constant | ENDPOINT descriptor type.
2      | bEndpointAddress | 1    | Endpoint | The address of the endpoint on the USB device described by this descriptor. The address is encoded as follows: D7: Direction (1 = IN endpoint); D6..4: Reserved, reset to zero; D3..0: The endpoint number, determined by the designer.
3      | bmAttributes     | 1    | Bitmap   | D3..2: Synchronization type (00 = None); D1..0: Transfer type (11 = Interrupt). All other bits are reserved.
4      | wMaxPacketSize   | 2    | Number   | Maximum packet size this endpoint is capable of sending or receiving when this configuration is selected. Used here to pass 2-byte status information. Set to 2 if not shared; set to the appropriate value if shared.
6      | bInterval        | 1    | Number   | Left to the designer's discretion. A value of 10 ms or more seems sufficient.
7      | bRefresh         | 1    | Number   | Reset to 0.
8      | bSynchAddress    | 1    | Endpoint | Reset to 0.

4.4.2.2 Class-Specific AC Interrupt Endpoint Descriptor

There is no class-specific AudioControl interrupt endpoint descriptor.

4.5 AudioStreaming Interface Descriptors

The AudioStreaming (AS) interface descriptors contain all relevant information to characterize the AudioStreaming interface in full.
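The Table 4-17 layout above can be sketched as a byte array. This is a hypothetical example, assuming an IN endpoint number 1 polled every 10 ms; both the address and the interval are at the designer's discretion.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical standard AC Interrupt Endpoint descriptor (Table 4-17).
 * Endpoint address and polling interval are example values. */
static const uint8_t ac_interrupt_ep_desc[] = {
    9,          /* bLength */
    0x05,       /* bDescriptorType: ENDPOINT */
    0x81,       /* bEndpointAddress: D7 = 1 (IN), endpoint number 1 */
    0x03,       /* bmAttributes: transfer type 11 = Interrupt, sync 00 = None */
    0x02, 0x00, /* wMaxPacketSize: 2 bytes, the status word, not shared */
    10,         /* bInterval: 10 ms polling (designer's discretion) */
    0,          /* bRefresh: reset to 0 */
    0,          /* bSynchAddress: reset to 0 */
};
```

Note that the audio class uses the 9-byte expanded endpoint descriptor (adding bRefresh and bSynchAddress) even for this interrupt endpoint, where both extra fields are simply reset to 0.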
4.5.1 Standard AS Interface Descriptor

The standard AS interface descriptor is identical to the standard interface descriptor defined in Section 9.6.3, "Interface," of the USB Specification, except that some fields now have dedicated values.

Table 4-18 Standard AS Interface Descriptor

Offset | Field              | Size | Value    | Description
0      | bLength            | 1    | Number   | Size of this descriptor, in bytes: 9
1      | bDescriptorType    | 1    | Constant | INTERFACE descriptor type.
2      | bInterfaceNumber   | 1    | Number   | Number of this interface. A zero-based value identifying the index in the array of concurrent interfaces supported by this configuration.
3      | bAlternateSetting  | 1    | Number   | Value used to select an alternate setting for the interface identified in the prior field.
4      | bNumEndpoints      | 1    | Number   | Number of endpoints used by this interface (excluding endpoint 0).
5      | bInterfaceClass    | 1    | Class    | AUDIO. Audio Interface Class code (assigned by the USB). See Section A.1, "Audio Interface Class Code."
6      | bInterfaceSubClass | 1    | Subclass | AUDIO_STREAMING. Audio Interface Subclass code, assigned by this specification. See Section A.2, "Audio Interface Subclass Codes."
7      | bInterfaceProtocol | 1    | Protocol | Not used. Must be set to 0.
8      | iInterface         | 1    | Index    | Index of a string descriptor that describes this interface.

4.5.2 Class-Specific AS Interface Descriptor

The bTerminalLink field contains the unique Terminal ID of the Input or Output Terminal to which this interface is connected. The bDelay field holds a value that is a measure for the delay introduced in the audio data stream by internal processing of the signal within the audio function. The Host software can take this value into account when phase relations between audio streams, processed by different audio functions, are important. The wFormatTag field holds information about the Audio Data Format that should be used when communicating with this interface.
If the interface has a USB isochronous endpoint associated with it, the wFormatTag field describes the Audio Data Format that should be used when exchanging data with this endpoint. If the interface has no endpoint, the wFormatTag field describes the Audio Data Format that is used on the (external) connection this interface represents. This specification defines a number of standard Formats, ranging from mono 8-bit PCM to MPEG-2 7.1 encoded audio streams. A complete list of supported Audio Data Formats is provided in a separate document, USB Audio Data Formats, that is considered part of this specification. Further specific information concerning the Audio Data Format for this interface is reported in a separate type-specific descriptor; see Section 4.5.3, "Class-Specific AS Format Type Descriptor." This can optionally be supplemented by format-specific information through a format-specific descriptor; see Section 4.5.4, "Class-Specific AS Format-Specific Descriptor."

Table 4-19 Class-Specific AS Interface Descriptor

Offset | Field              | Size | Value    | Description
0      | bLength            | 1    | Number   | Size of this descriptor, in bytes: 7
1      | bDescriptorType    | 1    | Constant | CS_INTERFACE descriptor type.
2      | bDescriptorSubtype | 1    | Constant | AS_GENERAL descriptor subtype.
3      | bTerminalLink      | 1    | Constant | The Terminal ID of the Terminal to which the endpoint of this interface is connected.
4      | bDelay             | 1    | Number   | Delay (d) introduced by the data path (see Section 3.4, "Inter Channel Synchronization"). Expressed in number of frames.
5      | wFormatTag         | 2    | Number   | The Audio Data Format that has to be used to communicate with this interface.

4.5.3 Class-Specific AS Format Type Descriptor

The wFormatTag field in the class-specific AS interface descriptor implicitly indicates which Format Type should be used to communicate with the connection (USB or external) this interface represents.
(Each Audio Data Format belongs to a certain Format Type, as outlined in USB Audio Data Formats.) Each Format Type has a specific Format Type descriptor associated with it. This class-specific AS Format Type descriptor follows the class-specific AS interface descriptor and delivers format type-specific information to the Host. The details and layout of this descriptor for each of the supported Format Types are found in USB Audio Data Formats.

4.5.4 Class-Specific AS Format-Specific Descriptor

As stated earlier, the wFormatTag field in the class-specific AS interface descriptor not only describes to which Format Type the interface belongs; it also states exactly which Audio Data Format should be used to communicate with the connection (USB or external) this interface represents. Some Audio Data Formats need additional format-specific information conveyed to the Host. Therefore, the Format Type descriptor may be followed by a class-specific AS format-specific descriptor. The details and layout of this descriptor for the Audio Data Formats that need it are outlined in USB Audio Data Formats.

4.6 AudioStreaming Endpoint Descriptors

The following sections describe all possible endpoint-related descriptors for the AudioStreaming interface.
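Tying Sections 4.5.1 through 4.5.3 together, the following is a hypothetical descriptor sequence for an operational AudioStreaming interface carrying 16-bit stereo PCM at 44.1 kHz: the standard AS interface descriptor (Table 4-18, alternate setting 1), the class-specific AS_GENERAL descriptor (Table 4-19), and a Type I format type descriptor, whose layout is taken from the companion USB Audio Data Formats document rather than from this section. Interface number, Terminal ID, and delay are invented for the example.

```c
#include <assert.h>
#include <stdint.h>

/* Standard AS interface descriptor (Table 4-18), alternate setting 1.
 * Interface number 1 is an example value. */
static const uint8_t as_std_iface_alt1[] = {
    9,    /* bLength */
    0x04, /* bDescriptorType: INTERFACE */
    1,    /* bInterfaceNumber (example) */
    1,    /* bAlternateSetting: 1 (setting 0 is the zero-bandwidth one) */
    1,    /* bNumEndpoints: the single isochronous data endpoint */
    0x01, /* bInterfaceClass: AUDIO */
    0x02, /* bInterfaceSubClass: AUDIO_STREAMING */
    0x00, /* bInterfaceProtocol: not used */
    0,    /* iInterface: no string */
};

/* Class-specific AS interface descriptor (Table 4-19). */
static const uint8_t as_general_desc[] = {
    7,          /* bLength */
    0x24,       /* bDescriptorType: CS_INTERFACE */
    0x01,       /* bDescriptorSubtype: AS_GENERAL */
    3,          /* bTerminalLink: connected to Terminal ID 3 (example) */
    1,          /* bDelay: one frame of internal delay (example) */
    0x01, 0x00, /* wFormatTag: PCM */
};

/* Type I format type descriptor; layout per USB Audio Data Formats 1.0:
 * bLength = 8 + 3 * bSamFreqType for discrete sampling frequencies. */
static const uint8_t fmt_type_i_desc[] = {
    11,               /* bLength: 8 + 3 * 1 */
    0x24,             /* bDescriptorType: CS_INTERFACE */
    0x02,             /* bDescriptorSubtype: FORMAT_TYPE */
    0x01,             /* bFormatType: FORMAT_TYPE_I */
    2,                /* bNrChannels: stereo */
    2,                /* bSubframeSize: 2 bytes per audio subframe */
    16,               /* bBitResolution: 16 bits used per subframe */
    1,                /* bSamFreqType: one discrete sampling frequency */
    0x44, 0xAC, 0x00, /* tSamFreq(1): 44100 Hz, 3-byte little-endian */
};
```

With this layout, Host software claims or relinquishes the stream's isochronous bandwidth by issuing a standard SET_INTERFACE request to switch between the zero-bandwidth alternate setting 0 and the operational alternate setting 1.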